Rationality Quotes August 2013
Another month has passed and here is a new rationality quotes thread. The usual rules are:
Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
Do not quote yourself.
Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you’d like to revive an old quote from one of those sources, please do so here.
No more than 5 quotes per person per monthly thread, please.
-- Atul Gawande, The Checklist Manifesto
I concur in the general case. But I would suggest the people complaining work in computers. I’m a Unix sysadmin; my job description is to automate myself out of existence. Checklist=shell script=JOB DONE, NEXT TASK TO ELIMINATE.
It turns out, thankfully, that work expands to fill the sysadmins available. Because even in the future, nothing works. I fully expect to be able to work to 100 if I want to.
Procrastination and The Extended Will 2009
Am I missing something? Why is this quote so popular? Is there something more to it than “you can do harder sums with a pencil and paper than you can in your head”? Or, I guess “writing stuff down is sometimes useful”.
Pencil and paper is far more reliable than your native memory, and also gives you a way to work on more than seven or so objects at once. Either one would expand your capabilities significantly. Taken together they’re huge, at least when you’re working with things that natural selection hasn’t optimized you for (i.e. yes for abstract math; not so much for facial recognition).
Right—but did anyone not know that?
Facts which seem obvious in retrospect are often less salient than they appear, outside of their native contexts. If I’d been asked to describe humans as computational systems before reading the ancestor, pen and paper probably wouldn’t be one of the things I’d have taken into account.
Yes.
The paper is about the importance of environmental scaffolding on behavior. One of the topics it touches on is akrasia in college students, and it hypothesizes that this is because they lost their usual scaffolding—the routine of their homes, their parents, etc.
The main point is that models of the human mind need to take into account the extent to which humans rely on external objects for computation. Paper and pencil are an extreme example of this.
The quote itself has further implications. In my opinion, this is the single most important technological development. As far as I’m concerned, the “Singularity” began when humans began using things other than their brains to store and process information. That was the beginning of the intelligence explosion, that was the first time we started doing something qualitatively different.
Everyone realizes that writing stuff down is useful, but since we do it all the time, not everyone realizes what a big deal it is. The important insight is that to write is to make the piece of paper a component of your memory and processing power.
Calvin
This phrase was explicitly in my mind back when I was generalizing the “notice confusion” skill.
When you were what?
Rationality 101 ;^)
Mark Rosewater, Kind Acts of Randomness
The chance of averaging exactly 3.5 would be a hell of a lot smaller. The chance of averaging between 3.45 and 3.55 would be larger, though.
Your chance of averaging 3.5 to two significant figures seems quite high indeed, though.
Unless you’re rolling an impractical number of dice for every attack, having your attacks do random damage (and not 22-24 like in MMORPGs but 1X-6X) is incredibly random. Even if you are rolling a ridiculous number of dice, the game can still be decided by one roll leaving a creature on the board or killing it by one or two points of damage.
What maths says that rolling dice doesn’t make the game more random? Maybe he means the game is overall less random, but I don’t see any argument for that, or reference to evidence of that claim.
If the reason for the game’s failure was that people thought it lacked skill, then additional randomness is not a decision to defend, even if people were slightly overestimating the randomness.
Having to roll dice in a card game is kind of a slap in the face too. In other card games you draw your cards then make the most of them. There’s 0 randomness to worry about except right when you draw your card or your opponent draws theirs (but you are often happily ignorant of whether they play a card from their hand or that they drew except in certain circumstances.) You can count cards and play based on what is left in your deck, or you know is not in your deck anymore.
Also, unlike miniature games, card games pretty much never start pre-deployed. You start with nothing on the board. If your turn one card kills his turn one card because of a dice roll, then he has nothing on the board and you have a creature, giving you some level of control over the board (depends on the game, often quite high). In a miniature game, if you kill more of his guys on turn one because of dice rolls, you still have an army, though smaller.
Why is this quote upvoted?
The more precise statement of “math says rolling more dice makes things less random” is that if you roll ten six-sided dice and add up the answer, the result will be less random (on its scale) than if you merely roll one six-sided die.
Even more precisely: the outcome of 10d6 is 68.7% likely to lie in the range [30,40], while the outcome of 1d6 is only 33.3% likely to lie in the corresponding range [3,4].
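The claim that a 10d6 total is tighter (relative to its scale) than a single die is easy to check exactly. A minimal sketch in Python, convolving the distribution rather than simulating; the range [30,40] and the 68.7%/33.3% figures are taken from the comments above:

```python
def dice_sum_distribution(n_dice, sides=6):
    """Exact {total: probability} for the sum of n_dice fair dice,
    built by repeated convolution with one die."""
    dist = {0: 1.0}
    for _ in range(n_dice):
        new = {}
        for total, p in dist.items():
            for face in range(1, sides + 1):
                new[total + face] = new.get(total + face, 0.0) + p / sides
        dist = new
    return dist

ten = dice_sum_distribution(10)
p_ten = sum(p for total, p in ten.items() if 30 <= total <= 40)
p_one = 2 / 6  # a single die landing on 3 or 4

print(f"P(30 <= 10d6 <= 40) = {p_ten:.3f}")  # roughly 0.687
print(f"P(3 <= 1d6 <= 4)   = {p_one:.3f}")  # 0.333
```

In other words, more dice per roll concentrate the total around the mean; it is the number of separate die-roll events per game, not the number of dice per roll, that adds variance.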
I think the quoted portion of the article addresses exactly this point: people were scared of rolling many dice because this meant lots of randomness, but the math says that the opposite effect occurs.
As to your other points (starting with “kind of a slap in the face”), that is addressed in the article, but not the quoted part. In summary: both rolling dice and drawing cards is random, but there’s a bunch of reasons why the randomness of drawing cards isn’t as frustrating. (It can be frustrating too, though.)
Maybe because of this part:
Rolling 10 dice instead of one makes the game less random. Rolling dice often instead of rarely makes the game more random. This game rolls dice for every attack, and not that many. The dude said people complained about lots of dice rolling, not rolling lots of dice. Yeah, obviously if you roll 10 dice it’s less random than rolling one, but what are the chances that card game enthusiasts (people “geeky” enough to play the Star Wars TCG) don’t understand that basic part of probability? It’s far more likely that people were annoyed at lots of dice rolling, not the amount of dice you roll each time. Which matches the reported complaints of the players. Not that I’d expect an accurate report of the players’ positions when making excuses for why rolling dice in a card game is a bad idea.
Turkish proverb
http://pbfcomics.com/72/
See also the appendix “Mathematical Formalities And Style” in Probability Theory by E.T. Jaynes.
-- Norwegian folktale.
I don’t understand this rationality quote. Is it about fighting akrasia? Self-hacking to effectively saving money? It clearly describes a method that wouldn’t actually work, and it could work as humour, but what does it mean as a rationality tale?
It’s a cautionary tale about Norwegian food.
It explains lutefisk.
the above is from Wikipedia entry on lutefisk. Believe it or not.
Obviously, that’s why they were all above average!
No, seriously, lutefisk is peasant food. Rich urban types eat smalahovve.
Betcha it’d work. I’m going to set a piece of candy in front of me, work for half an hour, and then put it back, at least once a day for a week.
I sometimes find that telling my Inner Lazy that it can decide—after I’ve done the first one—between whether to continue a series of tasks or to stop and be Lazy gets me to do the whole series of tasks. Despite having noticed explicitly that in practice this ‘decision delay strategy’ leads to the whole series getting done, it still works, and rather seems like tricking my Inner Lazy into handing the reins over to my Inner Agent.
Accountability check!
Did you do it? How’d it go?
Did it once, binge-ate the candy a few hours later, bought more candy, binge-ate it again. Trying again in two weeks (or going to the doctor if still prone to binging).
Oh, bother. I wish I’d seen this earlier.
In the context of LW, I took it as an amusing critique of the whole idea of rewarding yourself for behaviours you want to do more .
It’s either a cautionary tale about the dangers of deceiving yourself, or a humorous look at the impossibility of actually doing so.
I took it to be about the hidden complexity of wishes: people often say they want to have more money left at the end of the month when what they actually mean is that they want to have more money left at the end of the month without making themselves miserable in the process, and the easiest solution to the former needn’t be at all a solution to the latter.
It could be used as an effective “How to create an Ugh Field and undermine all future self-discipline attempts” instruction manual. It isn’t a rationality tale. It is confusing that 40 people evidently consider it to be one. (But only a little bit confusing. I usually expect non-rationalist quotes that would be accepted as jokes or inspirational quotes elsewhere to get around 10 upvotes in this thread regardless of merit. That means I’m surprised about the degree of positive reception.)
I don’t think you are correct.
The miser knows each time he will not get the reward, and that he will save on food and drink. That is the real reward, and the rest is a kabuki play he puts on for less-important impulses, to temporarily allow him to restrain them in service of his larger goal. The end pleasure of savings will provide strong positive reinforcement.
This could probably be empirically tested, to see if it is true and would work as a technique. I can imagine a test where someone is promised candy, and anticipates it while acting to fulfill a task, and then is rewarded instead with a dollar. Do they learn disappointment, or does the greater pleasure of money outweigh the candy? This is predicated on the idea that they would prefer the money, of course—you would need to tinker with amounts before the experiment might give useful results.
Also, don’t forget his pleasure at successfully tricking himself. ;-)
Myself, I’d just spend the dollar on candy.
That is not the same thing as the quote. Empirically testing your candy and dollars reward switch would tell us next to nothing about the typical efficacy of the dubious self deception of the miser.
You are telling me I am wrong, but it is not helpful to me unless you explain why I am wrong.
I thought it made sense. As far as I could tell, the original parable has a miser with two desires: the desire for delicious booze and the desire to save money. The latter desire is by far the more important one to him, so he “fools” his desire for booze by promising himself a booze reward, and then reneging on himself each time. In my interpretation, this still results in an overall positive effect for self-discipline, because the happiness of saving money is so much more important to the miser than the disappointment of missing the booze reward.
The truth of whether this would actually work could be seen in an experiment. I tried to think of one with two rewards that satisfy different desires, and tried to think of a way to slightly disappoint the desire for sugar while strongly rewarding the impulse for money, after the completion of the task. Maybe I should specify that people should be hungry before the task, and tested in the future when they are hungry, to see if they are still willing to complete the task?
That’s one way it could play out. It feels like this thinking also allows for it to work, because one might feel good about what got done by means of the trick, which would positively reinforce being tricked. I think the matter isn’t clear cut.
It’s interesting to view this story from source-code-swap Prisoner’s Dilemma / Timeless Decision Theory perspective. This can be a perfect epigraph in an article dedicated to it.
I thought the way he deceived his conscious mind, and never learned, was interesting.
--Professor Farnsworth, Futurama.
The threat of massive perfectly symmetrical violence, on the other hand...
Such a threat can also be effective for asymmetrical violence—no matter which way the asymmetry goes.
-- Graduate student of our group, recognising a level above his own in a weekly progress report
Now I’m curious about the context...
It wasn’t very interesting—some issue of how to make one piece of software talk to the code you’d just written and then store the output somewhere else. Not physics, just infrastructure. But the recognition of the levels was interesting, I thought. Although I do believe “literally five seconds” is likely an exaggeration.
-- Dennis Monokroussos
It’s probably a much more accurate feeling than the opposite one, though...
If I understand why I did something, I want to believe …
That is an interesting observation. For my part I do not experience horror in those circumstances, merely curiosity and uncertainty.
I think it may depend a lot on how well the action fits into your schema for reasonable behavior.
I have mild OCD. Its manifestations are usually unnoticeable to other people, and generally don’t interfere with the ordinary function of my life, but occasionally lead to my engaging in behaviors that no ordinary person would consider worthwhile. The single most extreme manifestation, which still stands out in my memory, was a time when I was playing a video game, and saved my game file, then, doubting my own memory that I had saved it, did it again… and again… and again… until I had saved at least seven times, each time convinced that I couldn’t yet be sure I had saved it “enough.”
Afterwards, I was horrified at my own actions, because what I had just done was too obviously crazy to just handwave away.
I used to do that a lot. I still have to fight the urge to save repeatedly when nothing has changed.
My obsessive compulsions are mostly mental though so it has had so little an impact on my interactions with others that I don’t think it counts as a disorder.
For me it fits my schema of reasonable behavior but also into my schema of “things other people may not like doing for which I don’t consider them irrational”.
Of course, I would rarely consider using a dollar as a bookmark. That would require stopping reading the book once I started it.
It depends on the context, in particular, whether the situation is one where you “must” have a good reason for your actions. Your reaction is appropriate for most ordinary situations; his is appropriate for the context he’s talking about (making a different move than the one you intended in a chess game) and other high-stakes situations (blurting out an answer you know is wrong in an examination, saying/doing something awkward on a date, making a risky maneuver while driving your car…)
I experience horrible feelings when I humiliate myself or put myself at risk. This phenomenon seems to occur independently of whether I have a good causal model for why I did those things.
OTOH, a good causal model may sometimes enable you to take action so as to not do that thing again.
Harry Potter and the Confirmed Critical, Chapter 6
Can you give a link to this story? It is surprisingly difficult to find.
It is the second book in the series Harry Potter and the Natural 20, which can be found here.
If you put the quote into quotation marks and search Google, it’s the fifth hit.
Thank you. This was a ‘duh!’ moment; I hadn’t realized it was the 2nd book of the Natural 20.
Ariel Castro (according to The Onion)
“So let’s split the difference and say I should have stopped at two.”
Is this just supposed to be a demonstration of irrationality? Can some one unpack this?
A demonstration of the gray fallacy. The opinions of Ariel Castro are not equidistant from the truth with those of the rest of society, and we don’t find the truth by finding a middle ground between his claims and those of everybody else.
I don’t know how this happened. My comment was supposed to be a reply to:
Ah. I read that one as a reference to the tendency to let tribal affiliation trump realistic evaluation of outcomes.
-Gloria Steinem
I read that as “looking for the right person to fall in love with”. Then the sense is “be the right person for someone else”. But that achieves a different goal entirely, since it doesn’t make the other person right for you.
There are many cases where you want a different person right for the task.
Romantic partners (inherently), trading and working partners (allowing you to specialize in your comparative advantage), deputies and office-holders (allowing you to deputize), soldiers (allowing you to send someone else to their death to win the war).
I assume the original intent of the quote was about romantic partners, where it means, “Instead of searching so hard, make sure to prioritize being awesome for its own sake.”
I was trying to repurpose it to express that action is better than preparing for something to fall into place more generally, and I think it’s appealed to people.
I originally read it as being about politics. We keep thinking that somewhere there’s a candidate worth voting for, and then things will be ok, but instead we should be trying to become the worthy candidates, even if only for local office. Or perhaps toward improving the world generally. Instead of deciding whether to pay Yudkowsky or Bostrom to work on existential risk, we should try applying our own talents. Similar to “[T]he phrase ‘Someone ought to do something’ was not, by itself, a helpful one. People who used it never added the rider ‘and that someone is me’.”
Skimming Gloria Steinem’s biography, I am more confident in this reading.
How isn’t “looking for” or “searching hard” action?
You still have to be the right person to be the right person in a team....?
But you don’t have to be perfect to be the right person in a team, and you don’t have to be “the” right person to be an asset to a team. People with low self-confidence plus low social confidence (plus possibly moralistic ideas about self-reliance) will try to self-improve through their own efforts rather than seeking help, regardless of how much less effective it is, believing they’re not worth someone else’s attention yet, or being afraid of owing someone, or whatever; quotes like Steinem’s reinforce that.
...Maybe. I don’t have any actual sources, so I could be totally wrong. Still, I’m not sure I like the focus on “being” rather than doing things.
Who said anything about being perfect?
And if you’re an asset, you sound pretty much like the right person to me.
To me the clause “be the right person” sounds very much active/action-based.
Completely putting teamwork aside, most major contributions to humanity were achieved by standing on the shoulders of those who came before.
Unknown
This could be studied empirically.
Difficult. The “distance” is metaphorical, and this probably doesn’t apply when there’s an easy, unambiguous, generally accepted metric. Without that, how do we do the study?
Still, if you have a way, it could be interesting.
-Daniel Kahneman, Thinking, Fast and Slow
On the other hand, the book doesn’t give a citation, and searching for the exact text of the question turns up only that passage. Not sure what to make of that.
Ross & Sicoly (1979). Egocentric Biases in Availability and Attribution.
In the study, the spouses actually estimated their contributions by making a slash mark on a line segment which had endpoints labelled “primarily wife” and “primarily husband”. The experimenters set it up this way, rather than asking for numerical percentages, for ethical reasons. In pilot testing using percentages, they “found that subjects were able to remember the percentages they recorded and that postquestionnaire comparisons of percentages provided a strong source of conflict between the spouses.” (p. 325)
If there is no easy, unambiguous generally accepted metric, that would seem to imply that everyone is a poor judge of distance—making the quote trivially true.
Or thinks he’s got better leverage than you.
Reynolds’ law
Status markers frequently indicate unusual access to resources as well as or even instead of character traits.
Subsidizing status markers dilutes them by making them less common.
How would you tell which factor is more important in the dilution of a status marker?
I can’t parse your post, but that may be partly because I don’t understand how subsidizing status markers would produce character traits to begin with.
Eugine_Nier’s comment has the suppressed premise that status usually results from character traits (alone, or primarily). NancyLebovitz’s response contradicts this suppressed premise.
If you get rich by being exceptionally virtuous, then redistributing the wealth will make it less obvious who is virtuous.
But if you get rich by having a rich dad, then redistributing the wealth will merely make it less obvious who had a rich dad.
I think the point is that it wouldn’t. You can have character traits, e.g. conscientiousness, that result in status markers, e.g. having saved a lot of money. If you make it easier for people to get the specific status marker, e.g. via welfare, the causal arrow doesn’t go in reverse and increase conscientiousness. You could expect it to have no effect, e.g. if conscientiousness and other traits are innate and entirely determined by age 4. (That’s kind of my default.) Or, in a slightly more complicated world where conscientiousness can vary depending on environment, i.e. there are a bunch of causal arrows bouncing around in confusing ways, “diluting” the status marker by making it easier to acquire might reduce the incentive to have the underlying trait, and make people less conscientious over time. I’ve heard the argument that this happens to people on welfare, although I’m tempted to say “correlation not causation”: who ends up on welfare in the first place already depends on conscientiousness.
At least in the US, saving money can disqualify you from welfare.
When my best friend was on welfare, they would take what she had earned at her part-time job the last month and subtract half that amount from her welfare. So there was still an incentive to work, albeit less. I don’t know to what degree she had to submit her budget or expenses to them (i.e. that they would actually know if she was saving money), but in general they seemed to make it as hard as possible to actually stay on Welfare.
That’s about income, not savings.
I don’t know what the policy was on savings, i.e. to what degree, if at all, they would reduce her monthly amount if she submitted her budget each month and was spending less. I get the impression that it’s kind of a basic fixed rate for, e.g., an adult not in school with one child, and that it’s realistically not enough to save from, even if you spend nothing on discretionary purchases or fun. She got around $900 a month, of which $550 alone went towards her part of our rent.
If she’d, for example, made $500 per paycheck (25 hours a week at Canadian minimum wage), that would make $1000 a month, so they’d take $500 off her welfare payment, for a monthly total income of $1400...which is enough to save at least a small amount per month, given our shared living expenses. In the US welfare system, would they cancel your welfare if you were able to save $200 a month of this total?
They did keep cancelling the welfare for unrelated reasons. (Example: her parents had had an education fund for her of about $10,000, but they’d spent it all on her wedding, and they sent her a letter saying her welfare was cancelled until she could submit documents proving this. Not a warning: cancelled. She missed a month or two before submitting the documents, and eventually gave up and just worked more hours.)
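The 50% earning exemption arithmetic above can be restated as a tiny sketch. This is just a restatement of the comment’s hypothetical numbers, not the actual Ontario Works formula, and `net_monthly_income` is a made-up name:

```python
def net_monthly_income(base_welfare, earnings, exemption_rate=0.5):
    """Monthly income when welfare is clawed back by the non-exempt
    share of earnings, with the payment floored at zero."""
    clawback = earnings * (1 - exemption_rate)
    payment = max(base_welfare - clawback, 0.0)
    return payment + earnings

# The hypothetical above: $900 welfare, $1000 earned, 50% exemption
# -> $500 clawback -> $400 welfare + $1000 earnings = $1400.
print(net_monthly_income(900, 1000))  # 1400.0
```

One consequence of the floor: once half your earnings exceed the base payment, the welfare component is zero and extra work is no longer taxed back at all.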
http://cfed.org/assets/scorecard/2013/rg_AssetLimits_2013.pdf
Short version: it varies quite a bit by state, but some major benefits in a fair number of states have a personal asset limit of two or three thousand dollars.
Thanks! So it looks like there’s a limit but at least someone thinks it’s a bad idea and some states are changing it...
According to this, the asset limit to qualify for Ontario Works (welfare) is $572 for a single adult and $1,550 for a lone parent. So, worse than in the US… (But it was $2500 for a single adult in 1981...) The 50% earning exemption is new from 2003, though.
Wow I have learned things today!
I’ve never provided any information about savings when applying for welfare. What organization has that policy?
See also: Credential Inflation
Fate/stay night
He just needs to get Saber to say it. Saber often tells people, in a bluntly matter-of-fact way, that they’re making a mistake. Rin knows this. If Shiro said it, though, she’d think it was some kind of dominance thing and get mad.
(Maybe I’m over-analyzing this.)
Slightly off-topic, but I keep seeing Fate/Stay night referenced on here, is it particularly ‘rationalist’ or do people just like it as entertainment?
It’s not an especially rational piece of work as such, although it has its moments, but it is one of the more detailed examinations of heroic responsibility and the associated cultural expectations in fiction (if you can get past the sometimes shaky translation). Your mileage might vary, but I see echoes of it whenever Eliezer writes about saving the world.
It has some elements that stand out in terms of rationalist virtue, and many others which don’t.
I found it to be very much a mixed bag, but the things it did well, I thought it did exceptionally well.
It’s not so much rationalist as… Eliezer-ish. See my review in the media thread: http://lesswrong.com/lw/i8c/august_2013_media_thread/9ilm
’Then he posed a question that, obvious as it seems, had not really occurred to me: “What makes you think that UFOs are a scientific problem?”
I replied with something to the effect that a problem was only scientific in the way it was approached, but he would have none of that, and he began lecturing me. First, he said, science had certain rules. For example, it has to assume that the phenomena it is observing is natural in origin rather than artificial and possibly biased. Now the UFO phenomenon could be controlled by alien beings. “If it is,” added the Major, “then the study of it doesn’t belong to science. It belongs to Intelligence.” Meaning counterespionage. And that, he pointed out, was his domain.
“Now, in the field of counterespionage, the rules are completely different.” He drew a simple diagram in my notebook. “You are a scientist. In science there is no concept of the ‘price’ of information. Suppose I gave you 95 per cent of the data concerning a phenomenon. You’re happy because you know 95 per cent of the phenomenon. Not so in intelligence. If I get 95 per cent of the data, I know that this is the ‘cheap’ part of the information. I still need the other 5 percent, but I will have to pay a much higher price to get it. You see, Hitler had 95 per cent of the information about the landing in Normandy. But he had the wrong 95 percent!”
“Are you saying that the UFO data we use to compile statistics and to find patterns with computers are useless?” I asked. “Might we be spinning our magnetic tapes endlessly discovering spurious laws?”
“It all depends on how the team on the other side thinks. If they know what they’re doing, there will be so many cutouts between you and them that you won’t have the slightest chance of tracing your way to the truth. Not by following up sightings and throwing them into a computer. They will keep feeding you the information they want you to process. What is the only source of data about the UFO phenomenon? It is the UFOs themselves!”
Some things were beginning to make a lot of sense. “If you’re right, what can I do? It seems that research on the phenomenon is hopeless, then. I might as well dump my computer into a river.”
“Not necessarily, but you should try a different approach. First you should work entirely outside of the organized UFO groups; they are infiltrated by the same official agencies they are trying to influence, and they propagate any rumour anyone wants to have circulated. In Intelligence circles, people like that are historical necessities. We call them ‘useful idiots’. When you’ve worked long enough for Uncle Sam, you know he is involved in a lot of strange things. The data these groups get is biased at the source, but they play a useful role.
“Second, you should look for the irrational, the bizarre, the elements that do not fit... Have you ever felt that you were getting close to something that didn’t seem to fit any rational pattern yet gave you a strong impression that it was significant?”’
Gregory (Scotland Yard detective): “Is there any other point to which you would wish to draw my attention?”
Holmes: “To the curious incident of the dog in the night-time.”
Gregory: “The dog did nothing in the night-time.”
Holmes: “That was the curious incident.”
“Silver Blaze” (Sir Arthur Conan Doyle)
If UFOs are controlled by a non-human intelligence, assuming they’ll behave like human schemes is as pointless as assuming they’ll behave like natural phenomena. But of course the premise is false and the Major’s approach is correct.
A creature that can build a spaceship is probably closer to one that can build a plane than it is to a rock; at least, you have to start somewhere.
-Unknown
-- Paraphrase of joke by Marcus Brigstocke
To be fair there are quite a few people who nowadays listen to electronic music, take drugs that are pills and who spend a lot of time in dark rooms.
That’s the joke.
It’s funny, but you really shouldn’t be learning life lessons from Tetris.
If Tetris has taught me anything, it’s the history of the Soviet Union.
We can reformulate Tetris as follows: challenges keep appearing (at a fixed rate), and must be solved at the same rate; we cannot let too many unsolved challenges pile up, or we will be overwhelmed and lose the game.
So Tetris is really an anti-procrastination learning tool? Hmmm, wonder why that doesn’t sound right….
But the challenge rate is not fixed. It increases at higher levels. So the lesson seems rather hollow: At some point, if you are successful at solving challenges, the rate at which new ones appear becomes too high for you.
Just like life. The reward for succeeding at a challenge is always a new, bigger challenge.
At which point you die, for lack of intelligence.
Actually a fairly good metaphor for x-risk, surprisingly.
Of course, it’s a lot easier to make a Tetris-optimizer than a Friendly AI...
I thought Tetris had been proven to always eventually produce an unclearable block sequence.
Only if there is a possibility of a sufficiently large run of S and Z pieces. In many implementations there is not.
It was either that or risk some people playing without stopping until their bodies died in the real world.
...thus becoming useful object lessons to the rest of the species, and reducing our average susceptibility to reward systems with low variability. Not quite seeing the problem here.
And today’s challenges can be used to remedy yesterday’s failures.
How is that a rationality quote?
LF:GFE
I’m afraid I don’t know what that stands for.
Logical Fallacy: Generalization from Fictional Evidence
Actually, it strikes me that this particular example shouldn’t be classified as GFE. “Errors pile up and accomplishments disappear” is a consequence of the way that the game logic works: in a sense, you could say that it’s a theorem implied by the axioms of the game. While it’s valid to say that Tetris is a flawed piece of procedural rhetoric in that its axioms do not correctly describe the real world, if you called it fictional evidence you would also be forced to call math fictional evidence, which probably isn’t what you’d want.
What?
Eh?
Ah, thank you.
Scott Adams
Aka http://demotivators.despair.com/demotivational/stupiditydemotivator.jpg
“Quitters never win, winners never quit, but those who never win AND never quit are idiots”
From the same website, another LessWrongian wisdom:
This is an incredibly important life skill.
David Deutsch, The Beginning of Infinity
Did Karl Popper populate his class with particularly unimaginative students? If someone asked me to “observe”, I’d fill an entire notebook with observations in less than an hour—and that’s even without getting up from my chair.
And, while you were writing, someone would provide the wanted answer ;)
I’m pretty sure I had this very exercise in a creative-writing class somewhere in school.
That’s an interesting prediction. Have you tried it? Can you predict what you’d do after filling the notebook?
In my imagination, I’d probably wind up in one of two states:
Feeling tricked and asking myself “What was the point of that?”
Feeling accomplished and waiting for the next instruction.
I have never tried it myself in a structured setting, such as a classroom; but I do sometimes notice things, and then ask myself, “What is going on here? Why does this thing behave in the way that it does?”. Sometimes I think about it for a while, figure out what sounds like a good answer, then go on with my day. Sometimes I shrug and forget about it. Sometimes—very rarely—I’m interested enough to launch a more thorough investigation. I imagine that if I set myself an actual goal to “observe” stuff, I’d notice a lot more stuff, and spend much more time on investigating it.
You say that, in such a situation, you could end up “feeling tricked”, but this assumes that the teacher who told you to “observe” is being dishonest: he’s not interested in your observations, he’s just interested in pushing his favorite philosophy onto you. This may or may not be the case with Karl Popper, but observations are valuable (and, IMO, fun) regardless.
Hmm, this point seems more Kuhnian than Popperian. Maybe Deutsch got the two confused.
Another view.
Bakemonogatari
In Bakemonogatari, the main characters often encounter spirits that only interact with specific people under specific conditions, although the effects they have are real (and would manifest to another’s eyes as inexplicable paranormal phenomena). As such it’s more a request about shoring up inconsistencies in sense perception, than it is about inconsistencies in belief.
That, and I’m getting the distinct impression their world is a non-Euclidean mess.
David Chapman
See also: “Figuring out what should be your top priority” vs. “Actually working on your current best guess”.
The opposite intellectual sin to wanting to derive everything from fundamental physics is holism, which makes too much of the fact that everything is ultimately connected to everything else. Sure, but scientific progress is made by finding where the connections are weak enough to allow separate theories.
-- John McCarthy
Rudyard Kipling, The Jungle Book
Peter Greer
The other way to look at that is the other agent doing basic induction.
It is. That doesn’t mean the results are good.
George Bernard Shaw
I agree with the thought, but I find the attribution implausible. “Finding yourself” sounds like modern pop-psych, not a phrase that GBS would ever have written. Google doesn’t turn up a source.
Google nGram suggests that “finding yourself” wasn’t a phrase that was really in use before the 1960s, albeit with a short uptick in 1940. Given that you need some time for criticism, and Shaw died in 1950, I think it’s quite clear that this quote is too modern for him. Although maybe post-modern is a more fitting word?
The timeframe seems to correspond with the rise of post-modern thought. If you suddenly start deconstructing everything you need to find yourself again ;)
I think you are right that it is difficult to find the exact source. I came upon this quotation in the book Up where the author quoted Bernard Shaw. Google gave me http://www.goodreads.com/author/quotes/5217.George_Bernard_Shaw, but no article or play was indicated as a source of this quote.
“Life is about creating yourself” still might be problematic because the emphasis is still on what sort of person you are.
As opposed to what? I would guess maybe a better concept is what you’re able to get done...
I think the implied contrast is between “creating yourself” and “what you do” or the less pretty but more precise “doing your actions.” The first implies a smaller, more rigid set than the last, which is perhaps not the correct way to perceive life.
-- GLaDOS from Portal 2
If you cast out all the easy strategies that don’t actually work as non-‘solutions’, then sure, in what remains among the set of solutions, the best is often the easiest, though not easy. I can think of much harder ways to save the world and I’m not trying any of them.
If you define best as easiest.
If best is defined as easiest, then the “usually” within the quote is entirely superfluous. “If” statements are logically exception-less, and the Law of Conserved Conversation (that I’ve just made up) means that “usually” implies exceptions. Otherwise it would be excluded from the quote. So I say, pedantically, “duh. but you’re missing the point a bit, aren’t you mate?”
I like to think of the principle as a kind of Occam’s for action. Don’t take elaborate actions to produce some solution that is otherwise trivially easy to produce.
You may want to read something about pragmatics, starting with e.g. the section on conversational implicatures in Chapter 1 of CGEL.
(Your made-up law sounds related to these.)
Huh. The Maxim of Relation does sound very much like what I was trying to go for.
I see it as more of a “rather than sorting projects by revenue, make sure to sort them by profit,” combined with “in cases where revenue is concave and cost linear, which happen frequently, the lowest cost project is probably going to be the highest profit.”
That plus “beware inflated revenue estimates, especially for have-it-all type plans”. Cost estimates are often much more accurate.
Alternatively, if you define solution such that any two given solutions are equally acceptable with respect to the original problem.
Stephen Jay Gould
A proactive interest in the latter would seem to lead to extensive instrumental interest in the former. Finding things (such as convolutions in brains or genes) that are indicative of potentially valuable talent is the kind of thing that helps make efficient use of it.
There are surprisingly few MRI machines or DNA sequencers in cotton fields and sweatshops. Paraphrasing the original quote from Stephen Jay Gould: The problem is not how good we are at detecting talent; it’s where we even bother to look for it.
You need neither MRI machines nor DNA sequencers to detect intelligence. IQ tests perform much better at detecting intelligence.
Yes; at this point with only 3 SNPs linked to intelligence, it’s a joke to say that ‘poor people aren’t being sequenced and this is why we aren’t detecting hidden gems’.
Yes, but that wasn’t the point of my post; I was replying to:
An MRI machine was an example of a device that could detect convolutions in brains; a DNA sequencer was an example of a device that could detect genes. My point generalized to “it doesn’t matter how good you are at testing for something, if you don’t apply the test.” If we look at IQ tests instead, then (again) it doesn’t matter how accurately a properly-administered IQ test detects intelligence, if you don’t bother properly administering IQ tests to people in cotton fields, sweatshops, or other places where you don’t feel like looking because they aren’t “under the lamppost”, as it were.
In a country like China there’s quite a bit of testing in school. I think it’s quite plausible that there are people who went through the Chinese school system working in Chinese sweatshops and cotton fields.
Is their IQ test properly designed and administered, or does the test-as-given have hidden correlations with things other than IQ?
I suspect, actually, that Gould would not view “find the geniuses and get them out of the fields” as a reasonable solution to the problem he poses. What he wants is for there to be no stoop labour in the first place, whether for geniuses or the terminally mediocre. The geniuses are just a way to illustrate the problem.
That’s a hard problem, with no reasonable way to measure it in a large population in sight, nor even the direction of the relationship taken into account. Ideally you’d take a bunch of kids and look at their brains and then see how they grew up and see whether you could find anything that altered the distribution in similar cases—but ….
Well, you see the problem? It’s a sort of twiddling your thumbs style studying, rather than addressing more immediate problems that might do something at a reasonable price/timeline.
There was only one Ramanujan; and we are all well-aware of Gould’s views on intelligence here, I presume.
they are not well known to me
http://lesswrong.com/lw/65b/scientific_misconduct_misdiagnosed_because_of/
http://lesswrong.com/lw/kv/beware_of_stephen_j_gould/
http://www.debunker.com/texts/jensen.html
Thanks
In what reference class?
I chose Ramanujan as my example because mathematics is extremely meritocratic, as proven by how he went from poor/middle-class Indian on the verge of starving to England on the strength of his correspondence & papers. If there really were countless such people, we would see many many examples of starving farmers banging out some impressive proofs and achieving levels of fame somewhat comparable to Einstein; hence the reference class of peasant-Einsteins must be very small since we see so few people using sheer brainpower to become famous like Ramanujan.
(Or we could simply point out that with average IQs in the 70s and 80s, average mathematician IQs closer to 140s—or 4 standard deviations away, even in a population of billions we still would only expect a small handful of Ramanujans—consistent with the evidence. Gould, of course, being a Marxist who denies any intelligence, would not agree.)
It is worth pointing out that Ramanujan, while poor, was still a Brahmin.
And not just that, but he had more education than the poorest Indians, and probably more than the second poorest. And got his hands on a math textbook, which was probably pretty low probability.
My bet is that there aren’t a lot of geniuses doing stoop labor, especially in traditional peasant situations, but there are some who would have been geniuses if they’d had enough food when young and some education.
Even the poorest Indians (or Chinese, for that matter) will sacrifice to put their children through school. Ramanujan’s initial education does not seem to have been too extraordinary, before his gifts became manifest (he scored first in exams, and that was how he was able to go to a well-regarded high school; pg25).
Actually, we know how he got his initial textbooks, which was in a way which emphasizes his poverty; pg26-27:
So just as well he was being lent and awarded all his books, because certainly at age 11 as a poor Indian it’s hard to see how he could afford expensive rare math or English books...
A rather tautological comment: yes, if we removed all the factors preventing people from being X, then presumably more people would be X...
Is the distribution for mathematicians in general stochastic with respect to IQ and a wealthy upbringing / proximity to cultural centres that reward such learning? That might give you signs of whether wealth / culture is a third correlate.
Otherwise, one way or the other, I’m not sure one person shifts the prob any appreciable distance.
It really depends on what ‘prob’ you’re talking about. For example, the mean of some variable can be shifted an arbitrary amount by a single person if they are arbitrarily large, which is why “robust statistics” shuns the mean in favor of things like the median, and of course a single counter-example disproves a universal claim. When you are talking about lists of geniuses where the relevant group of geniuses might be 10 or 20 people, 1 person may be fairly meaningful because the group is so small.
Being a Brahmin does not put rice on the table. Again, he was on the brink of starving, he says; this screens off any group considerations—we know he was very poor.
It screens off any wealth considerations, with the exception of his education (which is mildly relevant). It has a big impact on the question of average IQ and ancestry, though. Brahmin average IQ is probably north of 100,* and so a first-rank mathematician coming from a Brahmin family of any wealth level is not as surprising as a first-rank mathematician coming from a Dalit family.
So we still need to explain the absence (as far as I know) of first rate Dalit mathematicians. Gould argues that they’re there, and we’re missing them; the hereditarian argues that they’re not there. One way to distinguish between the two is to evaluate the counterfactual statement “if they were there, they wouldn’t be missed,” and while Ramanujan is evidence for that statement it’s weakened because of the potential impact of caste prejudice / barriers.
(It seems like the example of China might be better; it seems that young clever people have had the opportunity to escape sweatshops and cotton fields and enter the imperial service / university system for quite some time. Again, though, this is confounded by Han IQ being probably slightly north of 100, and so may not generalize beyond Northeast Asia and Europe.)
*Unfortunately, there is very little solid research on Indian IQ by caste.
You’d need to examine the IQ of the poorer Brahmins, though, before you could say it’s not surprising; otherwise if the poor Brahmins have the same IQs as equally poor Dalits, then it ought to be equally surprising.
But Ramanujan is evidence against the Great Filters of nationality and poverty, which ought to be much bigger filters against possible Einsteins than caste.
Yes, but I’m not very familiar with the background of major Chinese figures (eg. I just looked him up now and while I had assumed Confucius was a minor aristocrat, apparently he was actually the son of an army officer and “is said to have worked as a shepherd, cowherd, clerk, and a book-keeper.”); plus, you’d want to look at the post-Tang major Chinese figures, but that will exclude most major Chinese figures period like all the major philosophers—looking up the Chinese philosophy table in Murray’s Human Accomplishment, like the first 10 are all pre-examination (and Murray comments of one of them, ” it was Zhu Xi who was responsible for making Mencius as well known as he is today, by including Mencius’s work as part of “The Four Books” that became the central texts for both primary education and the civil service examinations”).
He’s literally as much evidence against those filters as he is evidence against hypothetical very low prevalence of poor innate geniuses.
I think it can be illustrative, as a counter to the spotlight effect, to look at the personalities of math/science outliers who come from privileged backgrounds, and imagine them being born into poverty. Oppenheimer’s conjugate was jailed or executed for attempted murder, instead of being threatened with academic probation. Gödel’s conjugate added a postscript to his proof warning that the British Royal Family were possible Nazi collaborators, which got it binned, which convinced him that all British mathematicians were in on the conspiracy. Newton and Turing’s conjugates were murdered as teenagers on suspicion of homosexuality. I have to make these stories up because if you’re poor and at all weird, flawed, or unlucky your story is rarely recorded.
A gross exaggeration; execution was never in the cards for a poisoned apple which was never eaten.
Likewise. Goedel didn’t go crazy until long after he was famous, and so your conjugate is in no way showing ‘privilege’.
Likewise. You have some strange Whiggish conception of history where all periods were ones where gays would be lynched; Turing would not have been lynched anymore than President Buchanan would have, because so many upper-class Englishmen were notorious practicing gays and their boarding schools Sodoms and Gomorrahs. To remember the context of Turing’s homosexuality conviction, this was in the same period where highly-placed gay Englishman after gay Englishman was turning out to be Soviet moles (see the Cambridge Five and how the bisexual Kim Philby nearly became head of MI6!) EDIT: pg137-144 of the Ramanujan book I’ve been quoting discusses the extensive homosexuality at Cambridge and its elite, and how tolerance of homosexuality ebbed and flowed, with the close of the Victorian age being particularly intolerant.
The right conjugate for Newton, by the way, reads ‘and his heretical Christian views were discovered, he was fired from Cambridge—like his successor as Lucasian Professor—and died a martyr’.
The problem is, we have these stories. We have Ramanujan who by his own testimony was on the verge of starvation—and if that is not poor, then you are not using the word as I understand it—and we have William Shakespeare (no aristocrat he), and we have Epicurus who was a slave. There is no censorship of poor and middle-class Einsteins. And this is exactly what we would expect when we consider what it takes to be a genius like Einstein, to be gifted in multiple ways, to be far out on multiple distributions (giving us a highly skewed distribution of accomplishment, see the Lotka curve): we would expect a handful of outliers who come from populations with low means, and otherwise our lists to be dominated by outliers from populations with higher means, without any appeal to Marxian oppression or discrimination necessary.
Do you really think the existence of oppression is a figment of Marxist ideology? If being poor didn’t make it harder to become a famous mathematician given innate ability, I’m not sure “poverty” would be a coherent concept. If you’re poor, you don’t just have to be far out on multiple distributions, you also have to be at the mean or above in several more (health, willpower, various kinds of luck). Ramanujan barely made it over the finish line before dying of malnutrition.
Even if the mean mathematical ability in Indians were innately low (I’m quite skeptical there), that would itself imply a context containing more censoring factors for any potential Einsteins...to become a mathematician, you have to, at minimum, be aware that higher math exists, that you’re unusually good at it by world standards, and being a mathematician at that level is a viable way to support your family.
On your specific objections to my conjugates...I’m fairly confident that confessing to poisoning someone else’s food usually gets you incarcerated, and occasionally gets you killed (think feudal society or mob-ridden areas), and is at least a career-limiting move if you don’t start from a privileged position. Hardly a gross exaggeration. Goedel didn’t become clinically paranoid until later, but he was always the sort of person who would thoughtlessly insult an important gatekeeper’s government, which is part of what I was getting at; Ramanujan was more politic than your average mathematician. I actually was thinking of making Newton’s conjugate be into Hindu mysticism instead of Christian but that seemed too elaborate.
I’m perfectly happy to accept the existence of oppression, but I see no need to make up ways in which the oppression might be even more awful than one had previously thought. Isn’t it enough that peasants live shorter lives, are deprived of stuff, can be abused by the wealthy, etc? Why do we need to make up additional ways in which they might be opppressed? Gould comes off here as engaging in a horns effect: not only is oppression bad in the obvious concrete well-verified ways, it’s the Worst Thing In The World and so it’s also oppressing Einsteins!
Not what Gould hyperbolically claimed. He didn’t say that ‘at the margin, there may be someone who was slightly better than your average mathematician but who failed to get tenure thanks to some lingering disadvantages from his childhood’. He claimed that there were outright historic geniuses laboring in the fields. I regard this as completely ludicrous due both to the effects of poverty & oppression on means & tails and due to the pretty effective meritocratic mechanisms in even a backwater like India.
It absolutely is. Don’t confuse the fact that there are quite a few brilliant Indians in absolute numbers with a statement about the mean—with a population of ~1.3 billion people, that’s just proving the point.
The talent can manifest as early as arithmetic, which is taught to a great many poor people, I am given to understand.
Really? Then I’m sure you could name three examples.
Sorry, I can only read what you wrote. If you meant he lacked tact, you shouldn’t have brought up insanity.
Really? Because his mathematician peers were completely exasperated at him. What, exactly, was he politic about?
Wait, what are you saying here? That there aren’t any Einsteins in sweatshops in part because their innate mathematical ability got stunted by malnutrition and lack of education? That seems like basically conceding the point, unless we’re arguing about whether there should be a program to give a battery of genius tests to every poor adult in India.
Not all of them, I don’t think. And then you have to have a talent that manifests early, have someone in your community who knows that a kid with a talent for arithmetic might have a talent for higher math, knows that a talent for higher math can lead to a way to support your family, expects that you’ll be given a chance to prove yourself, gives a shit, has a way of getting you tested...
Just going off Google, here: People being incarcerated for unsuccessful attempts to poison someone: http://digitaljournal.com/article/346684 http://charlotte.news14.com/content/headlines/628564/teen-arrested-for-trying-to-poison-mother-s-coffee/ http://www.ksl.com/?nid=148&sid=85968
Person being killed for suspected unsuccessful attempt to poison someone: http://zeenews.india.com/news/bihar/man-lynched-for-trying-to-poison-hand-pump_869197.html
I was trying to elegantly combine the Incident with the Debilitating Paranoia and the Incident with the Telling The Citizenship Judge That Nazis Could Easily Take Over The United States. Clearly didn’t completely come across.
He was politic enough to overcome Vast Cultural Differences enough to get somewhat integrated into an insular community. I hang out with mathematicians a lot; my stereotype of them is that they tend not to be good at that.
And this part seems entirely plausible. American slaves had no opportunity to become famous mathematicians unless they escaped, or chanced to have an implausibly benevolent Dumbledore of an owner.
Gould makes a much stronger claim, and I attach little probability to the part about the present day. But even there, you’re ignoring one or two good points about the actions of famous mathematicians. Demanding citations for ‘trying to kill people can ruin your life’ seems frankly bizarre.
The specific oppressions you led off with: yes.
I thought we were talking about Oppenheimer and Cambridge? It looks like if Oppenheimer hadn’t had rich parents who lobbied on his behalf, he might have gotten probation instead of not. Given his instability, that might have pushed him into a self-destructive spiral, or maybe he just would have progressed a little slower through the system. So, yes, jumping from “the university is unhappy” to “the state hangs you” is a gross exaggeration. (Universities are used to graduate students being under a ton of stress, and so do cut them slack; the response to Oppenheimer of “we think you need to go on vacation, for everyone’s safety” was ‘normal’.)
I’m sorry, I never really rigorously defined the counter-factuals we were playing with, but the fact that Oppenheimer was in a context where attempted murder didn’t sink his career is surely relevant to the overall question of whether there are Einsteins in sweatshops.
I don’t see the relevance, because to me “Einsteins in sweatshops” means “Einsteins that don’t make it to [Cambridge]”, for some Cambridge equivalent. If Ramanujan had died three years earlier, and thus not completed his PhD, he would still be in the history books. I mean, take Galois as an example: repeatedly imprisoned for political radicalism under a monarchy, and dies in a duel at age 20. Certainly someone ruined by circumstances—and yet we still know about him and his mathematical work.
In general, these counterfactuals are useful for exhibiting your theory but not proving your theory. Either we have the same background assumptions- and so the counterfactuals look reasonable to both of us- or we disagree on background assumptions, and the counterfactual is only weakly useful at identifying where the disagreement is.
I don’t think Epicurus was a slave. He did admit slaves to his school though, which is not something that was typical for his time. Perhaps you are referring to the Stoic, Epictetus, who definitely was a slave (although, white-collar).
Whups, you’re right. Some of the Greek philosophers’ names are so easy to confuse (I still confuse Xenophanes and Xenophon). Well, Epictetus was still important, if not as important as Epicurus.
I think a better term might be ‘meritocratic’, and not ‘democratic’. Unless mathematicians vote on mathematics?
Well, it is also democratic in the sense that what convinces the mathematical community is what matters, and there’s no ‘President of Mathematics’ or ‘Academie de la Mathematique’ laying down the rules, but yes, ‘meritocratic’ is closer to what I meant.
Well, “democratic” strongly suggests a majority vote, and it’s not like something that convinces 54% of the mathematicians who read it ‘wins’.
pg169-171, Kanigel’s 1991 The Man Who Knew Infinity:
Personally, having finished reading the book, I think Kanigel is wrong to think there is so much contingency here. He paints a vivid picture of why Ramanujan had failed out of school, lost his scholarships, and had difficulties publishing, and why two Cambridge mathematicians might mostly ignore his letter: Ramanujan’s stubborn refusal to study non-mathematical topics and refusal to provide reasonably rigorous proofs. His life could have been much easier if he had been less eccentric and prideful. That despite all his self-inflicted problems he was brought to Cambridge anyway is a testimony to how talent will out.
I haven’t heard that before. Do you have a source?
From his letter to G.H. Hardy:
Googling the text finds it quoted a bunch of places.
Wow, thanks!
Besides his letter to Hardy, Wikipedia cites The Man Who Knew Infinity (on Libgen; it also quotes the ‘half starving’ passage), where the cited section reads:
I can’t parse ’271″ feet’, is this an OCR issue? If you loosen the belt by two yards, it can obviously reach at least a yard above the surface, because you can just go from ____ to __|__. And I recall that the actual answer is considerably more than that.
Given that the symbol ” is the symbol for inches, and ′ is the symbol for feet, I would suspect that there has been a mistyping in the quote.
I think that what was meant to be there was 72″ or 72.1″ (inches), which is exactly/one-tenth of an inch over two yards (one yard = three feet). That would produce the desired result of a nearly one-foot increase in the radius of the belt; adding 72 inches to the circumference of the belt would produce an increase of 11.46 inches (72 inches / (2 × pi)) in the radius of the belt, which in this case is the height above the ground.
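The arithmetic here is the classic rope-around-the-Earth result: the radius gain depends only on the circumference gain, not on the circle's original size. A quick check:

```python
from math import pi

# Adding 72 inches (two yards) to a circle's circumference raises its
# radius by 72 / (2 * pi), regardless of how big the circle was.
delta_radius = 72 / (2 * pi)
print(delta_radius)  # ~11.46 inches, just shy of a foot
```

This is why the answer is the same whether the belt is around a waist or around the equator: C = 2πr, so ΔC of 72 inches always means Δr = 72/(2π).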
Was extremely democratic. Do we know this is still true?
“The Collapse of the Soviet Union and the Productivity of American Mathematicians” comes to mind as an interesting recent natural experiment where the floodgate of Russian mathematical talent was unleashed after the collapse of the USSR and many of them successfully rose in America despite academic math being a zero-sum game; consistent with meritocracy.
At the outlier level, I think so—see e.g. Perelman. At the normal professor-of-mathematics level, probably not.
Okay, maybe there aren’t other examples quite as good as him, but a few of these people surely come close.
Yes, but I’m not sure all of the populations working in cotton fields and sweatshops had such a low average IQ. (And Gould just said “people”, not “innumerable people” or something like that.)
Most of those people either seem to come from middle-class or better backgrounds, fall well below Einstein, or both (I mean, Eliezer Yudkowsky?)
Doesn’t your observation that most successful autodidacts come from financially stable backgrounds SUPPORT the hypothesis that intelligent individuals from low-income backgrounds are prevented from becoming successful?
With the facts you’ve highlighted, two conclusions may be drawn: either most poor people are stupid, or the aforementioned “starving farmers” don’t have the time or the resources to educate themselves or “[bang] out some impressive proofs,” on account of the whole “I’m starving and need to grow some food” thing. I don’t see how such people would be able to afford books to learn from or time to spend reading them.
No, it doesn’t; see my other comment. I was criticizing the list as a bizarre selection which did not include anyone remotely like Einstein.
How did Ramanujan afford books?
The answer to the autodidact point is to point out that once one has proven one’s Einstein-level talent, one is integrated into the meritocratic system and no longer considered an autodidact.
Did you mean innumerate people?
I meant ‘lots of people’, not ‘people who cannot do arithmetic’. looks word up EDIT: Huh, looks like that was the right word after all.
Sorry, then. Your phrasing sounded wrong to me, but I was wrong.
Will you update your post after looking the word up confirms that it means what you thought it did?
I was going to but I forgot to. Thank you.
Isn’t the average IQ 100 by definition?
Yes—but whose average?
Presumably the people who write the IQ test, based on whatever population sample they use to calibrate it. Is the point that the average IQ in India is 70-80, as opposed to the average in the US? (This could be technically true on an IQ test written in the US, without being meaningful, or it could be actually true because of nutrition or whatever). What data does the number 70-80 actually come from?
Presumably from this list.
It would naively seem that an IQ of 160 or more is 5 SDs from 85, but 4 SDs from 100, so the rarity would be 1⁄3,483,046 vs 1⁄31,560, for a huge ratio of 110 times the prevalence of extreme genius between the populations.
Except that this is not how it works when the IQ-100 population has been selected from the other and subsequently has lower variance. Nor is it how the Flynn effect worked. Because, of course, the standard deviation is not going to remain constant.
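The naive tail probabilities in the grandparent can be reproduced from the standard normal distribution (assuming, as that comment does, an SD of 15 in both populations, which is exactly the assumption being disputed here):

```python
from math import erfc, sqrt

def upper_tail(z):
    """P(Z > z) for a standard normal variable."""
    return 0.5 * erfc(z / sqrt(2))

# IQ 160 is 4 SDs above a mean of 100, but 5 SDs above a mean of 85 (SD = 15).
p_from_100 = upper_tail((160 - 100) / 15)  # ~1 in 31,600
p_from_85 = upper_tail((160 - 85) / 15)    # ~1 in 3,490,000
print(p_from_100 / p_from_85)              # ~110x prevalence ratio
```

One extra SD of distance multiplies the rarity by about 110 here, which is why tail ratios are so sensitive to both the assumed mean and the assumed (not necessarily equal) standard deviation.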
You presume too much, the only thing I remember about Gould’s views is that they are controversial.
Kostas Kiriakakis, A Day at the Park.
This is good, although when I read the comic I find myself interpreting Eye as valuing curiosity for curiosity’s sake alone, in direct opposition to valuing truth, which I can’t really get behind and leads to me siding with the old man.
Peter Greer
Rewarding those who tell great stories is hardly limited to non-profits. Hollywood of course does this, as well it should. Fund-raising for new ventures does this a lot; raising money for many sorts of investment at the retail level is largely an effort of telling good stories not particularly supported by statistical fact.
Which isn’t to say that this is not a problem for non-profits, but rather that non-profits might do well to see how other industries deal with this phenomenon.
At least in investing the people listening to the stories eventually find out whether their investment went sour.
The problem is doubtless exacerbated when those paying for the service and those receiving it live in different time periods.
-- Tillaume, The Alloy of Law
— Robert Fripp
George Monbiot, Introduction: On Trying to be Less Wrong.
Not true. Trivially, if A is definitively wrong, then ~A is definitively right. Popperian falsification is trumped by Bayes’ Theorem.
Note: This means that you cannot be definitively wrong, not that you can be definitively right.
True, but possibly dangerously close to “There is no virtue in following other people or in cultivating followers”.
David Griffiths, Introduction to Quantum Mechanics. The book is not so good but I liked this quote.
I haven’t studied quantum mechanics in any depth at all. The meaning I, as a layman, derive from this statement is: in the formal QM system a particle has no property labelled “position”. There is perhaps an emergent property called position, but it is not fundamental and is not always well defined, just like there are no ice-cream atoms. Is this wrong?
Yes, it’s wrong. In the QM formalism position is a fundamental property. However, the way physical properties work is very different from classical mechanics (CM). In CM, a property is basically a function that maps physical states to real numbers. So the x-component of momentum, for instance, is a function that takes a state as input and spits out a number as output, and that number is the value of the property for that state. Same state, same number, always. This is what it means for a property to have a well-defined value for every state.
In QM, physical properties are more complicated—they’re linear operators, if you want a mathematically exact treatment. But here’s an attempt at an intuitive explanation: There are some special quantum states (called eigenstates) for which physical properties behave pretty much like they do in CM. If the particle is in one of those states, then the property takes the state as input and basically just spits out a number. Whenever the particle is in that state, you get the same number. For those states, the property does have a well-defined value.
But the problem in QM is that those are not the only states there are. There are other states as well. These states are linear combinations of the eigenstates, i.e. they correspond to sums of eigenstates (states in QM are basically just vectors, so you can sum them together). These linear combinations are not themselves eigenstates. When you input them into the property, it spits out multiple numbers, not just one. In fact it spits out all the numbers corresponding to each of the eigenstates that are summed together to form our linear combination state. So if A and B are eigenstates for which the property in question spits out numbers a and b respectively, then for the combined state A + B, the property will spit out both a and b—two numbers, not just one.
So the property isn’t just a simple function from states to numbers; for some states you end up with more than one number. And which of those numbers do you see when you make a measurement? Well, that depends on your interpretation. In collapse theories, for instance, you see one of the numbers chosen at random. In MWI, the world branches and each one of those numbers is seen on a separate branch. So there’s the sense in which properties aren’t well-defined in QM—properties don’t associate a unique number with every physical state. This is all pretty hand-wavey, I realize, but Griffiths is right. If you really want an understanding of what’s going on, then you need to study QM in some depth.
Also, I should say that in MWI there is something to your claim that the position of a particle is emergent and not fundamental, but this is not so much because of the nature of the property. It’s because particles themselves are emergent and non-fundamental in MWI. The universal wavefunction is fundamental.
Thanks for the detailed explanation! Now I have more fun words to remember without actually understanding :-)
Seriously, thanks for taking the time to explain that.
-Steven Spielberg
Dollars are floppy. It’s nice to have a relatively rigid bookmark. I’ve used tissues and such as bookmarks in the past but they’re unsatisfactory. Of course, that was back when I still read books in dead tree format.
I’m reminded of a picture I saw on Facebook of a doorstop still in its original packaging used as a doorstop.
My bookmark is prettier than the dollar.
But when it’s being used, you don’t see it!
My bookmark is made of two pieces of fridge-magnet material. It can be closed around a few pages and the magnetism holds it in place, preventing it from falling out.
Plus dollars in my country are exclusively coins, the smallest note is $5.
-Abstract, Material priming: The influence of mundane physical objects on situational construal and competitive behavioral choice (via Yvain)
It will fall out. Apart from that, money isn’t particularly clean and (especially if considering US currency) not particularly pretty either. I expect people to find a bookmark far more aesthetically pleasing than a note.
How is this a rationality quote? It is rationality-neutral at best.
“Because the dollar is dirty” is one of those pained, stretched explanations people come up with to explain why they do what they do, not the actual reason (even in some small part) the bookmark was invented and became popular.
The question wasn’t “Why was the bookmark invented?”. If it was, I might have, for example, tried to determine the first time someone used a bookmark (or when it became popular). Then I could have told you precisely how many dollars in present value that dollar would have been worth. That is, moving the goalposts in this way has made your quote worse, not better.
Not even in some small part? That's absurd. Can you not empathise in even a small part with the aesthetic aversion many people have to contaminating things with used currency?
Are you sure you didn’t just go ahead and basically make up these people who don’t want money to touch their book because it’s dirty?
No. I’ve seen such people. When I look in the mirror, for example. Notice that the standard was explicitly set to:
The observation that this kind of absurd claim is positively received and even supported by similarly ridiculous petty sniping is disheartening.
I’ve known at least a couple people who found it yucky to handle cash right before a meal for that same reason.
I definitely wash my hands after handling money and before eating.
The answer may very well be, “because I find this bookmark that I bought at a dollar store a lot more aesthetically pleasing than the raw dollar bill”.
You may as well ask, “Why spend $20 on a book? Why not just save the $20?”
I get all kinds of entertainment out of reading a $20 bill.
Arr.
I do neither. I use any piece of sufficiently stiff paper I happen to have around (bookmarks purchased by someone else, playing cards, used train tickets, whatever).
I tear out a blank page from the nearest notebook of sufficient size, and fold it as necessary.
Or just fold the corner of the page over.
While I respect your right to do so, I find such a concept aesthetically horrifying.
I never understood that… I remember when I was in elementary school there was a sign in the library that said something like “Don’t dog-ear your books… you wouldn’t like it if someone folded your ear over, so don’t do it to your book.” What?
That’s not particularly uncomfortable.
You’re suffering from the typical ear fallacy. Some people have much stiffer cartilage, or something; I don’t find it uncomfortable, but I’ve met people who’re caused actual pain by it.
With library books, I think the concern is more about wear-and-tear on shared property. Some of us leakily generalize this to “folding page corners is bad”, even for non-shared books. When it’s your own book, you can do whatever you want.
Personally I find folded page corners less effective than bookmarks for quickly finding my place, especially if I’ve folded many other page corners, which makes the currently-folded one less visually obvious. But perhaps I’d learn to be better at that if I used it regularly.
It’s a permanent mark that easily leads to tearing.
I made one when I was bored, long ago when my grandmother still ran her store and my uncle still ran his immigration law firm on the third floor, and when I was obsessed with knot theory, out of computer paper, tape, and a lot of hard pencil. I still use it, and it cost me next to nothing.
EDIT: If requested (however unlikely) I will happily deliver a picture, and either a push or a bouillon cube (your choice). EDIT THE SECOND: it was requested! http://imgur.com/a/kxanI
Yes please! :-)
Done! Do you want a bouillon cube or a push? Think wisely.
What kind of push?
This kind!
I feel like I want the last few minutes of my life back.
That leaves a permanent crease, which I dislike. (Likewise, I prefer to use pencils—preferably soft pencils—rather than pens to take notes.)
It would seem that most of the responders are hopelessly literal....
I find it hard to come up with a deeper meaning for the original statement, so yeah.
Besides, it’s not hard to come up with a deeper meaning behind what the responders are saying; in pointing out that an object specifically designed as a bookmark makes a better bookmark than a dollar bill, they’re making a statement about more than just dollar bills and bookmarks, but about specialization in general.
“We don’t automatically reflect on most things we do, even when spending money. Even lifelong practices can be shown as absurd with a moment’s consideration from the right angle. In fact, we’re so irrational that we’ll pay a dollar for a bookmark!”
A decision with an aesthetic benefit is not irrational. You are misusing “irrational”.
(Or was this sarcasm?)
Reworded so people don’t get caught up in that particular phrasing. (Also, please read the comment tree and note that I’m just trying to answer Jiro’s implied question.)
I don’t see why everyone is disagreeing with you. I definitely notice that people have a tendency to buy things labeled for some sort of purpose, where if they thought for a few minutes they could find a way to fulfill that same purpose without spending money. Unfortunately, I can’t think of any examples off the top of my head.
That’s clearly the intent—except maybe for that last bit—but it’s kinda a poor example, I have to admit.
Reworded so people don’t get caught up in that particular phrasing. (Also, please read the comment tree and note that I’m just trying to answer Jiro’s implied question.)
Your quote is both literally and connotatively poor. If Spielberg had asked “Why spend two dollars on a bookmark? … Why not use a dollar as a bookmark?” then there would at least have been some moral along the lines of efficient practicality. Even then it would be borderline.
A dollar is much more fungible than a bookmark. After you’re done reading your book, you can not only use the dollar to hold your place in other books, you can spend it on other things.
It is indeed a considerably more fungible one dollar.
It takes time and effort (admittedly not much of it, but usually even little of it makes a difference psychologically) to spend $1 on a bookmark. (I would have phrased it as “Why bother spending …”.)
Why use a bookmark that’s worth a whole dollar? I use scrap paper, or a sticky note if falling out is a risk (it almost always isn’t.)
-Robert Downey Jr.
I think it’s good to be well-calibrated.
It is usually best to be socially confident while making well-calibrated predictions of success. The two are only slightly related and Downey is definitely talking about the social kind of confidence.
Good point. I’m still not sure I like his framing of social interactions as getting people on “your” team (which I may be partly biased in by the source of the quote), but the objection in my initial post isn’t a good one.
I think it’s best to be well-calibrated, use that to choose your team as one that’s going to succeed, and then to be confident.
Maybe I’m misunderstanding the quote, but this seems to wither if you have something to protect. If I’m having surgery, I don’t really want the team of expert surgeons listening to my suggestions. I shouldn’t be on my team because I’m not qualified. Highly qualified people should be so that my team will win (and I get to live).
Well, I think the thrust of the quote had more to do with being confident in your own projects. But I’ll try to do an answer to your point because I think it’s important to recognise the limitations of domain specialists—some of whom just aren’t very good at their jobs.
If you’re not on your team of expert surgeons, you’re gonna be screwed if they’re not actually as expert as you might think they are. There’s a bit in What Do You Care What Other People Think? where Feynman talks about his first wife’s hospitalisation: he had done some reading around the area and come up with the idea that it might be TB, but didn’t push the idea because he thought the doctors knew what they were doing.
[Feynman moves onto less likely possibilities]
[Gets convinced to lie to her that it’s Hodgkins—lie falls through]
Point being, disinvolving yourself from decisions is not a no-risk choice, and specialists aren’t necessarily wise just because they’ve sat through the classes and crammed some sort of knowledge into their heads to get a degree. Assigning trust is a difficult subject.
There’s a book called The Speed of Trust—and that’s pretty much what you give up in being involved in complex decisions where you’re not a specialist and where the specialists are actually really good at their jobs—a bit of speed.
Expert surgeons tend to think that more problems should be solved via surgery than doctors who aren’t surgeons. Before getting surgery you should always talk with a doctor who knows something about the kind of illness you are having who isn’t a surgeon.
After the operation is done doctors will ask you if everything is alright with you. If you try to understand what the operation involved you will give your doctor answers that are likely to be more informative than if you just try to place all responsibility onto another person.
Especially if you feel something that’s not normal for the type of operation you had, it’s important to be confident that you perceive something worth bringing to the attention of your doctor.
Having had big operations myself (one with 8 weeks of hospitalisation and one with 3), I think not taking enough responsibility for myself in those contexts was one of the worst decisions I made in my life. But then I was young and stupid about how the world works at the time.
Only if you’re not the one with the responsibility to do something to protect it. I don’t know the context of the quote, other than apparently being from an interview (with the actor, not any character he has played), but I read it as being about your own efforts to accomplish something. In such matters, you are the first person on your team, and you won’t get any others on board by telling them you’re not sure this is a good idea. Once you’ve made the decision that you are going to go for it, you have to then go for it, not sit around wondering if it’s the right decision. If you’re not acting on a decision, you didn’t make it.
That may be a better wording of what I was trying to say here.
This works as a rationalization growing from the conclusion that others should be “on your team”. If on well-calibrated assessment you yourself are not “on your team”, others probably shouldn’t be either, in which case projecting confidence amounts to deceit.
(Unless I don’t understand what you are saying) I reject whatever definition ‘deceit’ is given such that the above claim is true. Behaving in a socially confident manner is different in nature to lying.
I was using “confidence” in a more specific sense, as in “overconfidence”, that is implying that you know what you are doing, in the case where you actually don’t. “Socially confident manner” might in contrast (for example, among many other things) involve willingness to state your state of uncertainty, as opposed to hiding it (including behind overconfidence).
This seems reasonable. Misleading about probabilities is deceptive. To be fair to Robert Downey, it doesn’t seem likely that that is the usage he was making in the quote.
Jehovah’s Witnesses (or insert your cult of choice) who secretly don’t believe in what they’re selling, army recruiters who have secretly come to know and reject the horrors of war, insurance salesmen who sell useless policies:
All these (and many others) can be deceitful even without telling you their respective lies explicitly, just by using their social capital / community standing / aura of authority to signal their allegiance to their tribe, lending it credence in a deceitful (dishonest because not in tune with their well-calibrated assessment) manner. The similarity to lying comes from social cues (such as exuding confidence in one’s role) and ‘explicit’ lies being forms of communication both.
It is possible to deceive others while using social confidence signals. Such signals are instrumentally useful or even vital for this and many other purposes. But that is not the same thing as the confidence itself being deceitful.
A somewhat similar sentiment: http://lesswrong.com/lw/2o3/rationality_quotes_september_2010/2kol
Why shouldn’t they be? The idea that if you don’t rate yourself highly no one should is just an excuse for shitty instincts.
Obviously it’s a useful piece of nonsense to tell yourself. People are more likely to come to your side if you are confident. But the explicit reasoning is reprehensible. (Not that any explicit reasoning probably went into it; it’s such a common idea that it gets repeated without thought. It’s almost a universal applause light.)
This is more of an irrationality quote: a bit of paper-thin justification for a shitty but common sentiment which it’s useful to adopt rather than notice.
-Thomas Jefferson
One who possesses a maximum-entropy prior is further from the truth than one who possesses an inductive prior riddled with many specific falsehoods and errors. Or more to the point, someone who endorses knowing nothing as a desirable state for fear of accepting falsehoods is further from the truth than somebody who believes many things, some of them false, but tries to pay attention and go on learning.
How about “If you know nothing and are willing to learn, you’re closer to the truth than someone who’s attached to falsehoods”? Even then, I suppose you’d need to throw in something about the speed of learning.
It would seem that the difference of opinion here originates in the definition of further. Someone who knows nothing is further (in the information-theoretic sense) from the truth than someone who believes a falsehood, assuming that the falsehood has at least some basis in reality (even if only an accidental relation), because they must flip more bits of their belief (or lack thereof) to arrive at something resembling truth. On the other hand, in the limited, human, psychological sense, they are closer, because they have no attachments to relinquish, and they will not object to having their state of ignorance lifted from them, as one who believes in falsehoods might object to having their state of delusion destroyed.
Right, I’d take it as a statement on how humans actually think, not how a perfect rationalist thinks. Or maybe how most humans think since humans can be unattached to their beliefs.
To me, “riddled with many specific falsehoods and errors” translates into more falsehoods than “some”. Though I agree it’s not a very good quote within the context of LW.
-LessWrong Community
Maybe it’s just where my mind was when I read it but I interpreted the quote as meaning something more like:
“It is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence.”
In what units does one measure distance from the truth, and in what manner?
Bits of Shannon entropy.
That’s half of the answer. In what manner does one measure the number of bits of Shannon entropy that a person has?
If you make a numerical statement of your confidence -- P(A) = X, 0 < X < 1 -- measuring the shannon entropy of that belief is a simple matter of observing the outcome and taking the binary logarithm of your prediction or the converse of it, depending on what came true. S is shannon entropy: If A then S = log2(X), If ¬A then S = log2(1 - X).
The lower the magnitude of the resulting negative real, the better you fared.
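The rule described above can be written out concretely (a sketch; `log_score` is a hypothetical helper name, not anything standard):

```python
import math

def log_score(p_a: float, a_happened: bool) -> float:
    """Binary log score for a stated confidence P(A) = p_a.

    Returns log2(p_a) if A happened, log2(1 - p_a) otherwise.
    0 is a perfect score; more negative is worse.
    """
    return math.log2(p_a) if a_happened else math.log2(1 - p_a)

print(log_score(0.9, True))   # confident and right: about -0.152
print(log_score(0.9, False))  # confident and wrong: about -3.32
print(log_score(0.5, True))   # maximum uncertainty: exactly -1.0
```

Note that a stated confidence of exactly 0 or 1 scores negative infinity when wrong, which is one way of formalising why absolute certainty is pathological.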
That allows a prediction/confidence/belief to be measured. How do you total a person?
Simple, under dubiously ethical and physically possible conditions, you turn their internal world model into a formal bayesian network, and for every possible physical and mathematical observation and outcome, do the above calculation. Sum, print, idle.
It’s impossible in practice, but it’s only, like, a four-line formal definition.
How do you measure someone whose internal world model is not isomorphic to one formal Bayesian network (for example, someone who is completely certain of something)? Should it be the case that someone whose world model contains fewer possible observations has a major advantage in being closer to the truth?
Note also that a perfect Bayesian will score lower than some gamblers using this scheme. Betting everything on black does better than a fair distribution almost half the time.
I am not very certain that humans actually can have an internal belief model that isn’t isomorphic to some bayesian network. Anyone who proclaims to be absolutely certain; I suspect that they are in fact not.
How do you account for people falling prey to things like the conjunction fallacy?
I don’t think people just miscalculate conjunctions. Everyone will tell you that HFFHF is less probable than H, HF, or even HFF. Errors appear only when the strings get long, the difference is small, and the strings are quite specially crafted. And with the scenarios, a more detailed scenario looks more plausibly like the product of some deliberate reasoning; plus, the existence of one detailed scenario is information about the existence of other detailed scenarios leading to the same outcome (and it must be made clear in the question that we are not asking about the outcome but about everything happening precisely as the scenario specifies).
On top of that, the meaning of the word “probable” in everyday context is somewhat different—a proper study should ask people to actually make bets. All around it’s not clear why people make this mistake, but it is clear that it is not some fully general failure to account for conjunctions.
edit: actually, just read the wikipedia article on the conjunction fallacy. When asking about “how many people out of 100”, nobody gave a wrong answer. Which immediately implies that the understanding of “probable” has been an issue, or some other cause, but not some general failure to apply conjunctions.
There have been studies that asked people to make bets. Here’s an example. It makes no difference—subjects still arrive at fallacious conclusions. That study also goes some way towards answering your concern about ambiguity in the question. The conjunction fallacy is a pretty robust phenomenon.
I’ve just read the example beyond its abstract. Typical psychology: the actual finding was that there were fewer errors with the bet (even though the expected winnings were tiny and the sample sizes were small, so the difference was only marginally significant); also, approximately half of the questions were answered correctly, and the high prevalence of “conjunction fallacy” was attained by counting at least one error across many questions.
How is it a “robust phenomenon” if it is negated by using strings of larger length difference in the head-tail example or by asking people to answer in the N out of 100 format?
I am thinking that people have to learn reasoning to answer questions correctly, including questions about probability, for which the feedback they receive from the world is fairly noisy. And consequently they learn that fairly badly, or mislearn it all-together due to how more detailed accounts are more frequently the correct ones in their “training dataset” (which consists of detailed correct accounts of actual facts and fuzzy speculations).
edit: Let’s say, the notion that people are just generally not accounting for conjunctions is sort of like Newtonian mechanics. In a hard science, physics, Newtonian mechanics was abandoned as a fundamental account of reality once conditions were found where it did not work. It didn’t matter how “robust” it was. In a soft science, psychology, an approximate notion persists in spite of this, as if the question should be decided by some sort of tug-of-war between experiments for and against the notion. If we did physics like this, we would never have moved beyond Newtonian mechanics.
Framing the problem in terms of frequencies mitigates a number of probabilistic fallacies, not just the conjunction fallacy. It also mitigates, for instance, base rate neglect. So whatever explanation you have for the difference between the probability and frequency framings shouldn’t rely on peculiarities of the conjunction fallacy case. A plausible hypothesis is that presenting frequency information simply makes algorithmic calculation of the result easier, and so subjects are no longer reliant on fallible heuristics in order to arrive at the conclusion.
The claim of the heuristics and biases program is that the conjunction fallacy is a manifestation of the representativeness heuristic. One does not need to suppose that there is a misunderstanding about the word “probability” involved (if there is, how do you account for the betting experiments?). The difference in the frequency framing is not that it makes it clear what the experimenter means by “probability”, it’s that the ease of algorithmic reasoning in that case reduces reliance on the representativeness heuristic. Further evidence for this is that the fallacy is also mitigated if the question is framed in terms of single-case probabilities, but with a diagram clarifying the relationship between properties in the problem. If the effect were merely due to a misunderstanding about what is meant by “probability”, why would there be a mitigation of the fallacy in this case? Does the diagram somehow make it clear what the experimenter means by “probability”?
In response to your Newtonian physics example, it’s simply not true that scientists abandoned Newtonian mechanics as soon as they found conditions under which it appeared not to work. Rather, they tried to find alternative explanations that preserved Newtonian mechanics, such as positing the existence of Uranus to account for discrepancies in planetary orbits. It was only once there was a better theory available that Newtonian mechanics was abandoned. Is there currently a better account of probabilistic fallacies than that offered by the heuristics and biases program? And do you think that there is anything about the conjunction fallacy research that makes it impossible to fit the effect within the framework of the heuristics and biases program?
I’m not familiar with the effect of variable string length difference, and quick Googling isn’t helping. If you could direct me to some research on this, I’d appreciate it.
There’s only room for making it easier when the word “probable” is not synonymous with “larger N out of 100“. So I maintain that alternate understanding of the word “probable” (and perhaps also an invalid idea of what one should bet on) are relevant. edit: to clarify, I can easily imagine an alternate cultural context where “blerg” is always, universally, invariably, a shorthand for “N out of 100”. In such context, asking about “N out of 100” or about “blerg” should produce nearly identical results.
Also, in your study, about half of the questions were answered correctly.
I guess that’s fair enough, albeit its not clear how that works on Linda-like examples.
In my opinion its just that through their life people are exposed to a training dataset which consists of
Detailed accounts of real events.
Speculative guesses.
and (1) is much more commonly correct than (2) even though (1) is more conjunctive. So people get mis-trained through a biased training set. A very wide class of learning AIs would get mis-trained by this sort of thing too.
The point is that you can’t pull the representativeness trick with e.g. R vs RGGRRGRRRGG. All the research I’ve ever seen used strings with a small percentage difference in their lengths. I am assuming that the research is strongly biased towards studying something un-obvious, while it is fairly obvious that R is more probable than RGGRRGRRRGG, and frankly we do not expect to find anyone who thinks RGGRRGRRRGG is more probable than R.
Maybe a misunderstanding about the word is relevant, but it clearly isn’t entirely responsible for the effect. Like I said, the conjunction fallacy is much less common if the structure of the question is made clear to the subject using a diagram (e.g. if it is made obvious that feminist bank tellers are a proper subset of bank tellers). It seems implausible that providing this extra information will change the subject’s judgment about what the experimenter means by “probable”.
The description given of Linda in the problem statement (outspoken philosophy major, social justice activist) is much more representative of feminist bank tellers than it is of bank tellers.
In the study you quoted, a bit less than half of the answers were wrong, in sharp contrast to the Linda example, where 90% of the answers were wrong. That implies that at least 40% of the failures were a result of misunderstanding, leaving only 60% for fallacies. Of that 60%, some people have other misunderstandings and other errors of reasoning, and some people are plain stupid (1 person in 10 is in the bottom decile, i.e. has an IQ of 80 or less), leaving easily less than 50% for the actual conjunction fallacy.
Why so? If the word “probable” is fairly ill defined (as well as the whole concept of probability), then it will or will not acquire specific meaning depending on the context.
Then the representativeness works in the opposite direction from what’s commonly assumed of the dice example.
Speaking of which, “is” is sometimes used to describe traits for identification purposes, e.g. “in general, an alligator is shorter and less aggressive than a crocodile” is more correct than “in general, an alligator is shorter than a crocodile”. If you were to compile traits for finding Linda, you’d pick the most descriptive answer. People know they need to do something with what they are told, they don’t necessarily understand correctly what they need to do.
Poor brain design.
Honestly, I could do way better if you gave me a millennium.
You know, at some point, whoever’s still alive when that becomes not-a-joke needs to actually test this.
Because I’m just curious what a human-designed human would look like.
How likely do you believe it is that there exists a human who is absolutely certain of something?
Is this a testable assertion? How do you determine whether someone is, in fact, absolutely certain?
It’s not unheard of for people to bet their lives on some belief of theirs.
That doesn’t show that they’re absolutely certain; it just shows that the expected value of the payoff outweighs the chance of them dying.
The real issue with this claim is that people don’t actually model everything using probabilities, nor do they actually use Bayesian belief updating. However, the closest analogue would be people who will not change their beliefs in literally any circumstances, which is clearly false. (Definitely false if you’re considering, e.g. surgery or cosmic rays; almost certainly false if you only include hypotheticals like cult leaders disbanding the cult or personally attacking the individual.)
Is someone absolutely certain if they say that they cannot imagine any circumstances under which they might change their beliefs (or, alternately, can imagine only circumstances which they are absolutely certain will not happen)? That seems like a better definition, as it locates probability (and certainty) in the mind, rather than outside it.
In this case, I would see no contradiction in declaring someone to be absolutely certain of their beliefs, though I would say (with non-absolute certainty) that they are incorrect. Someone who believes that the Earth is 6000 years old, for example, may not be swayed by any evidence short of the Christian god coming down and telling them otherwise, an event to which they may assign 0.0 probability (because they believe that it’s impossible for their god to contradict himself, or something like that).
Further, I would exclude methods of changing someone’s mind without using evidence (surgery or cosmic rays). I can’t quite put it into words, but it seems like the fact that it isn’t evidence and instead changes probabilities directly means that it doesn’t so much affect beliefs as it replaces them.
Disagree. This would be a statement about their imagination, not about reality.
Also, people are not well calibrated on this sort of thing. People are especially poorly calibrated on this sort of thing in a social context, where others are considering their beliefs.
ETA: An example: While I haven’t actually done this, I would expect that a significant fraction of religious people would reply to such a question by saying that they would never change their beliefs because of their absolute faith. I can’t be bothered to do enough googling to find a specific interviewee about faith who then became an atheist, but I strongly suspect that some such people actually exist.
Yeah, fair enough.
You are correct. I am making my statements on the basis that probability is in the mind, and as such it is perfectly possible for someone to have a probability which is incorrect. I would distinguish between a belief which it is impossible to disprove, and one which someone believes it is impossible to disprove, and as “absolutely certain” seems to refer to a mental state, I would give it the definition of the latter.
(I suspect that we don’t actually disagree about anything in reality. I further suspect that the phrase I used regarding imagination and reality was misleading; sorry, it’s my standard response to thought experiments based on people’s ability to imagine things.)
I’m not claiming that there is a difference between their stated probabilities and the actual, objective probabilities. I’m claiming that there is a difference between their stated probabilities and the probabilities that they actually hold. The relevant mental states are the implicit probabilities of their internal belief system; while words can be some evidence about this, I highly suspect, for reasons given above, that anybody who claims to be 100% confident of something is simply wrong in mapping their own internal beliefs (which they don’t have explicit access to, and which probably aren’t even stored as probabilities) onto explicitly stated probabilities.
Suppose that somebody stated that they cannot imagine any circumstances under which they might change their beliefs. This is a statement about their ability to imagine situations; it is not a proof that no such situation could possibly exist in reality. The fact that it is not is demonstrated by my claim that there are people who did make that statement, but then actually encountered a situation that caused them to change their belief. Clearly, these people’s statement that they were absolutely, 100% confident of their belief was incorrect.
I would still say that while belief-altering experiences are certainly possible, even for people with stated absolute certainty, I am not convinced that they can imagine them occurring with nonzero probability. In fact, if I had absolute certainty about something, I would as a logical consequence be absolutely certain that any disproof of that belief could not occur.
However, it is also not unreasonable that someone does not believe what they profess to believe in some practically testable manner. For example, someone who states that they have absolute certainty that their deity will protect them from harm, but still declines to walk through a fire, would fall into such a category—even if they are not intentionally lying, on some level they are not absolutely certain.
I think that some of our disagreement arises from the fact that I, being relatively uneducated (for this particular community) about Bayesian networks, am not convinced that all human belief systems are isomorphic to one. This is, however, a fault in my own knowledge, and not a strong critique of the assertion.
First, fundamentalism is a matter of theology, not of intensity of faith.
Second, what would these people do if their God appeared before them and flat out told them they’re wrong? :-D
Fixed, thanks.
Their verbal response would be that this would be impossible.
(I agree that such a situation would likely lead to them actually changing their beliefs.)
At which point you can point out to them that God can do WTF He wants and is certainly not limited by ideas of pathetic mortals about what’s impossible and what’s not.
Oh, and step back, exploding heads can be messy :-)
This is not the place to start dissecting theism, but would you be willing to concede the possible existence of people who would simply not be responsive to such arguments? Perhaps they might accuse you of lying and refuse to listen further, or refute you with some biblical verse, or even question your premises.
Of course. Stuffing fingers into your ears and going NA-NA-NA-NA-CAN’T-HEAR-YOU is a rather common debate tactic :-)
Don’t you observe people doing that to reality, rather than updating their beliefs?
That too. Though reality, of course, has ways of making sure its point of view prevails :-)
Reality has shown itself to be fairly ineffective in the short term (all of human history).
8-0
In my experience reality is very very effective. In the long term AND in the short term.
Counterexamples: Religion (Essentially all of them that make claims about reality). Almost every macroeconomic theory. The War on Drugs. Abstinence-based sex education. Political positions too numerous and controversial to call out.
You are confused. I am not saying that false claims about reality cannot persist—I am saying that reality always wins.
When you die you don’t actually go to heaven—that’s Reality 1, Religion 0.
Besides, you need to look a bit more carefully at the motivations of the people involved. The goal of writing macroeconomic papers is not to reflect reality well, it is to produce publications in pursuit of tenure. The goal of the War on Drugs is not to stop drug use, it is to control the population and extract wealth. The goal of abstinence-based sex education is not to reduce pregnancy rates, it is to make certain people feel good about themselves.
Wait, isn’t that pretty much tautological, given the definition of ‘reality’?
What’s your definition of reality?
I can’t give a very general definition that is still useful, but reality is what determines whether a belief is true or false.
I thought you were saying that reality has a pattern of convincing people of true beliefs, not that reality is indifferent to belief.
You misunderstood. Reality has the feature of making people face the true consequences of their actions regardless of their beliefs. That’s why reality always wins.
Most of my definition of ‘true consequences’ matches my definition of ‘reality’.
Sort of. Particularly in the case of belief in an afterlife, there isn’t a person still around to face the true consequences of their actions. And even in less extreme examples, people can still convince themselves that the true consequences of their actions are different—or have a different meaning—from what they really are.
In those cases reality can take more drastic measures.
Edit: Here is the quote I should have linked to.
Believing that 2 + 2 = 5 will most likely cause one to fail to build a successful airplane, but that does not prohibit one from believing that one’s own arithmetic is perfect, and that the incompetence of others, the impossibility of flight, or the condemnation of an airplane-hating god is responsible for the failure.
See my edit. Basically, the enemy airplanes flying overhead and dropping bombs should convince you that flight is indeed possible. Also, any remaining desire you have to invent excuses will go away once one of the bombs explodes close enough to you.
What’s the goal of rationalism as a movement?
No idea. I don’t even think rationalism is a movement (in the usual sociological meaning). Ask some of the founders.
The founders don’t get to decide whether or not it is a movement, or what goal it does or doesn’t have. It turns out that many founders in this case are also influential agents, but the influential agents I’ve talked to have expressed that they expect the world to be a better place if people generally make better decisions (in cases where objectively better decision-making is a meaningful concept).
Careful, those are the kind of political claims where there is currently so much mind-kill that I wouldn’t trust much of the “evidence” you’re using to declare them obviously false.
The general claim is one where I think it would be better to test it on historical examples.
So, because Copernicus was eventually vindicated, reality prevails in general? Only a smaller subset of humanity believes in science.
This is not an accurate representation of mainstream theology. Most theologists believe, for example, that it is impossible for God to do evil. See William Lane Craig’s commentary.
First, you mean Christian theology; there are a lot more theologies around.
Second, I don’t know what “mainstream” theology is. Is it the official position of the Roman Catholic Church? Some common elements in Protestant theology? Does anyone care about Orthodox Christians?
Third, the question of limits on the Judeo-Christian God is a very, very old theological issue which has not been resolved to everyone’s satisfaction, and no resolution is expected.
Fourth, William Lane Craig basically evades the problem by defining good as “what God is”. God can still do anything He wants and whatever He does automatically gets defined as “good”.
Clearly they would consider this entity a false God/Satan.
This is starting to veer into free-will territory, but I don’t think God would have much problem convincing these people that He is the Real Deal. Wouldn’t be much of a god otherwise :-)
That’s vacuously true, of course. Which makes your original question meaningless as stated.
It wasn’t so much meaningless as it was rhetorical.
I cannot imagine circumstances under which I would come to believe that the Christian God exists. All of the evidence I can imagine encountering which could push me in that direction if I found it seems even better explained by various deceptive possibilities, e.g. that I’m a simulation or I’ve gone insane or what have you. But I suspect that there is some sequence of experience such that if I had it I would be convinced; it’s just too complicated for me to work out in advance what it would be. Which perhaps means I can imagine it in an abstract, meta sort of way, just not in a concrete way? Am I certain that the Christian God doesn’t exist? I admit that I’m not certain about that (heh!), which is part of the reason I’m curious about your test.
If imagination fails, consult reality for inspiration. You could look into the conversion experiences of materialist, rationalist atheists. John C Wright, for example.
So you’re effectively saying that your prior is zero and will not be budged by ANY evidence.
Hmm… smells of heresy to me… :-D
I would argue that this definition of absolute certainty is completely useless as nothing could possibly satisfy it. It results in an empty set.
If you “cannot imagine under any circumstances” your imagination is deficient.
I am not arguing that it is not an empty set. Consider it akin to the intersection of the set of natural numbers and the set of infinities: the fact that it is the empty set is meaningful. It means that by following the rules of simple, additive arithmetic, one cannot reach infinity, and if one does reach infinity, that is a good sign of an error somewhere in the calculation.
Similarly, one should not be absolutely certain if they are updating from finite evidence. Barring omniscience (infinite evidence), one cannot become absolutely/infinitely certain.
What definition of absolute certainty would you propose?
So you are proposing a definition that nothing can satisfy. That doesn’t seem like a useful activity. If you want to say that no belief can stand up to the powers of imagination, sure, I’ll agree with you. However if we want to talk about what people call “absolute certainty” it would be nice to have some agreed-on terms to use in discussing it. Saying “oh, there just ain’t no such animal” doesn’t lead anywhere.
As to what I propose, I believe that definitions serve a purpose and the same thing can be defined differently in different contexts. You want a definition of “absolute certainty” for which purpose and in which context?
You are correct, I have contradicted myself. I failed to mention the possibility of people who are not reasoning perfectly, and in fact are not close, to the point where they can mistakenly arrive at absolute certainty. I am not arguing that their certainty is fake—it is a mental state, after all—but rather that it cannot be reached using proper rational thought.
What you have pointed out to me is that absolute certainty is not, in fact, a useful thing. It is the result of a mistake in the reasoning process. An inept mathematician can add together a large but finite series of natural numbers, write down “infinity” after the equals sign, and thereafter go about believing that the sum of a certain series is infinite.
The sum is not, in fact, infinite; no finite set of finite things can add up to an infinity, just as no finite set of finite pieces of evidence can produce absolute, infinitely strong certainty. But if we use some process other than the “correct” one (just as the mathematician’s brain somehow outputs “infinity” from the finite inputs it has been given), we can generate absolute certainty from finite evidence. It simply isn’t correct: it doesn’t correspond to something which is either impossible or inevitable in the real world, just as the inept mathematician’s infinity does not correspond to a real infinity. Rather, they both correspond to beliefs about the real world.
While I do not believe that there are any rationally acquired beliefs which can stand up to the powers of imagination (though I am not absolutely certain of this belief), I do believe that irrational beliefs can. See my above description of the hypothetical young-earther; they may be able to conceive of a circumstance which would falsify their belief (i.e. their god telling them that it isn’t so), but they cannot conceive of that circumstance actually occurring (they are absolutely certain that their god does not contradict himself, which may have its roots in other absolutely certain beliefs or may be simply taken as a given).
:-) As in, like, every single human being...
Yep. Provided you limit “proper rational thought” to Bayesian updating of probabilities this is correct. Well, as long your prior isn’t 1, that is.
I’d say that if you don’t require internal consistency from your beliefs then yes, you can have a subjectively certain belief which nothing can shake. If you’re not bothered by contradictions, well then, doublethink is like Barbie—everything is possible with it.
Well, yes.
That is the point.
Nothing is absolutely certain.
Why does a deficient imagination disqualify a brain from being certain?
Vice versa. Deficient imagination allows a brain to be certain.
… ergo there exist human brains that are certain.
if people exist that are absolutely certain of something, I want to believe that they exist.
So… a brain is allowed to be certain because it can’t tell it’s wrong?
Tangent: Does that work?
Nope. “I’m certain that X is true now” is different from “I am certain that X is true and will be true forever and ever”.
I am absolutely certain today is Friday. Ask me tomorrow whether my belief has changed.
In fact, unless you’re insane, you probably already believe that tomorrow will not be Friday!
(That belief is underspecified- “today” is a notion that varies independently, it doesn’t point to a specific date. Today you believe that August 16th, 2013 is a Friday; tomorrow, you will presumably continue to believe that August 16th, 2013 was a Friday.)
Not exactly that but yes, there is the reference issue which makes this example less than totally convincing.
The main point still stands, though—certainty of a belief and its time-invariance are different things.
I very much doubt that you are absolutely certain. There are a number of outlandish but not impossible worlds in which you could believe that it is Friday, yet it might not be Friday; something akin to the world of The Truman Show comes to mind.
Unless you believe that all such alternatives are impossible, in which case you may be absolutely certain, but incorrectly so.
I don’t have to believe that the alternatives are impossible; I just have to be certain that the alternatives are not exemplified.
Define “absolute certainty”.
In the brain-in-the-vat scenario which is not impossible I cannot be certain of anything at all. So what?
So you’re not absolutely certain. The probability you assign to “Today is Friday” is, oh, nine nines, not 1.
Nope. I assign it the probability of 1.
On the other hand, you think I’m mistaken about that.
On the third tentacle I think you are mistaken because, among other things, my mind does not assign probabilities like 0.999999999 -- it’s not capable of such granularity. My wetware rounds such numbers and so assigns the probability of 1 to the statement that today is Friday.
So if you went in to work and nobody was there, and your computer says it’s Saturday, and your watch says Saturday, and the next thirty people you ask say it’s Saturday… you would still believe it’s Friday?
If you think it’s Saturday after any amount of evidence, after assigning probability 1 to the statement “Today is Friday,” then you can’t be doing anything vaguely rational—no amount of Bayesian updating will allow you to update away from probability 1.
If you ever assign something probability 1, you can never be rationally convinced of its falsehood.
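To make this concrete, here is a minimal sketch of Bayes’ rule (the function and the likelihood numbers are mine, purely for illustration): a prior of exactly 1 is immovable under updating, while even nine-nines confidence yields to strong evidence.

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    # Posterior P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
    p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
    return prior * p_e_given_h / p_e

# Suppose the evidence E is 1000x likelier if H ("Today is Friday") is false.
print(bayes_update(0.999999999, 0.001, 1.0))  # drops to roughly 0.999
print(bayes_update(1.0, 0.001, 1.0))          # stays exactly 1.0
```

Whatever the likelihoods, the (1 - prior) term vanishes when the prior is 1, so the posterior is always 1.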
That’s not true. There are ways to change your mind other than through Bayesian updating.
Sure. But by definition they are irrational kludges made by human brains.
Bayesian updating is a theorem of probability: it is literally the formal definition of “rationally changing your mind.” If you’re changing your mind through something that isn’t Bayesian, you will get the right answer iff your method gives the same result as the Bayesian one; otherwise you’re just wrong.
The original point was that human brains are not all Bayesian agents. (Specifically, that they could be completely certain of something)
… Okay?
Okay, so, this looks like a case of arguing over semantics.
What I am saying is: “You can never correctly give probability 1 to something, and changing your mind in a non-Bayesian manner is simply incorrect. Assuming you endeavor to be /cough/ Less Wrong, you should force your System 2 to abide by these rules.”
What I think Lumifer is saying is, “Yes, but you’re never going to succeed because human brains are crazy kludges in the first place.”
In which case we have no disagreement, though I would note that I intend to do as well as I can.
I wasn’t restricting the domain to the brains of people who intrinsically value being rational agents.
I am sorry, I must have been unclear. I’m not saying “yes, but”; I’m saying “no, I disagree”.
I disagree that “you can never correctly give probability 1 to something”. To avoid silly debates over 1/3^^^3 chances I’d state my position as “you can correctly assign a probability that is indistinguishable from 1 to something”.
I disagree that “changing your mind in a non-Bayesian manner is simply incorrect”. That looks to me like an overbroad claim that’s false on its face. The human mind is rich and multifaceted; trying to limit it to performing a trivial statistical calculation doesn’t seem reasonable to me.
I think the claim is that, whatever method you use, it should approximate the answer the Bayesian method would use (which is optimal, but computationally infeasible)
The thing is, from a probabilistic standpoint, one is essentially infinity—it takes an infinite number of bits of evidence to get probability 1 from any finite prior.
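The “infinite bits” point can be checked in odds form: each bit of evidence doubles the odds p/(1-p), and no finite number of doublings turns finite odds into the infinite odds that probability 1 represents. A sketch (the helper is my own; exact rationals are used so floating point doesn’t silently round to 1.0):

```python
from fractions import Fraction

def update_bits(p, bits):
    # Each bit of evidence doubles the odds p/(1-p).
    odds = Fraction(p) / (1 - Fraction(p)) * 2 ** bits
    return odds / (1 + odds)

p = Fraction(1, 2)
for bits in (10, 100, 1000):
    gap = 1 - update_bits(p, bits)  # distance remaining to certainty
    print(bits, float(gap))         # shrinks fast but never reaches 0
```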
And the human mind is a horrific repurposed adaptation not at all intended to do what we’re doing with it when we try to be rational. I fail to see why indulging its biases is at all helpful.
Given that rationality here is often defined as winning, it seems to me you think natural selection works in the opposite direction.
… Um. No?
I might have been a little hyperbolic there—the brain is meant to model the world—but...
Okay, look, have you read the Sequences on evolution? Because Eliezer makes the point much better than I can as of yet.
Regardless of EY, what is your point? What are you trying to express?
*sigh*
My point, as I stated the first time, is that evolution is dumb, and does not necessarily design optimal systems. See: optic nerve connecting to the front of the retina. This is doubly true of very important, very complex systems like the brain, where everything has to be laid down layer by layer and changing some system after the fact might make the whole thing come crumbling down. The brain is simply not the optimal processing engine given the resources of the human body: it’s Azathoth’s “best guess.”
So I see no reason to pander to its biases when I can use mathematics, which I trust infinitely more, to prove that there is a rational way to make decisions.
How do you define optimality?
LOL.
Sorry :-/
So, since you seem to be completely convinced of the advantage of the mathematical “optimal processing” over the usual biased and messy thinking that humans normally do—could you, um, demonstrate this advantage? For example financial markets provide rapid feedback and excellent incentives. It shouldn’t be hard to exploit some cognitive bias or behavioral inefficiency on the part of investors and/or traders, should it? After all their brains are so horribly inefficient, to the point of being crippled, really...
Actually, no, I would expect that investors and/or traders would be more rational than the average for that very reason. The brain can be trained, or I wouldn’t be here; that doesn’t say much about its default configuration, though.
As far as biases—how about the existence of religion? The fact that people still deny evolution? The fact that people buy lottery tickets?
And as far as optimality goes—it’s an open question, I don’t know. I do, however, believe that the brain is not optimal, because it’s a very complex system that hasn’t had much time to be refined.
That’s not good enough—you can “use mathematics” and that gives you THE optimal result, the very best possible—right? As such, anything not the best possible is inferior, even if it’s better than the average. So by being purely rational you still should be able to extract money out of the market taking it from investors who are merely better than the not-too-impressive average.
As to optimality, unless you define it *somehow* the phrase “brain is not optimal” has no meaning.
That is true.
I am not perfectly rational. I do not have access to all the information I have. That is why am I here: to be Less Wrong.
Now, I can attempt to use Bayes’ Theorem on my own lack-of-knowledge, and predict probabilities of probabilities—calibrate myself, and learn to notice when I’m missing information—but that adds more uncertainty; my performance drifts back towards average.
Not at all. I can define a series of metrics (energy consumption and “win” ratio being the most obvious), define an n-dimensional function on those metrics, and then prove that, given bounds in all directions, a maximum exists so long as my function meets certain criteria (mostly continuity).
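The existence claim being invoked here is, in effect, the extreme value theorem (stated in my notation; the particular metrics and bounds are left abstract):

```latex
% A continuous real-valued function on a closed, bounded (compact)
% domain attains its maximum somewhere on that domain.
\[
  K \subset \mathbb{R}^n \ \text{compact}, \quad
  f : K \to \mathbb{R} \ \text{continuous}
  \;\Longrightarrow\;
  \exists\, x^{*} \in K : \; f(x^{*}) = \max_{x \in K} f(x).
\]
```

Note this guarantees that a maximum value is attained, not that it is attained at a unique point.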
I can note that given the space of possible functions and metrics, the chances of my brain being optimal by any of them is extremely low. I can’t really say much about brain-optimality mostly because I don’t understand enough biology to understand how much energy draw is too much, and the like; it’s trivial to show that our brain is not an optimal mind under unbounded resources.
Which, in turn, is really what we care about here—energy is abundant, healthcare is much better than in the ancestral environment, so if it turns out our health takes a hit because of optimizing for intelligence somehow we can afford it.
I don’t think you can guarantee ONE maximum. But in any case, the vastness of the space of all n-dimensional functions makes the argument unpersuasive. Let’s get a bit closer to the common, garden-variety reality and ask a simpler question. In which directions do you think the human brain should change/evolve/mutate to become more optimal? And in those directions, is it the further the better, or is there a point beyond which one should not go?
Um, I have strong doubts about that. Your body affects your mind greatly (not to mention your quality of life).
Yes.
No, unless you define “rationally changing your mind” this way in which case it’s just a circle.
Nope.
The ultimate criterion of whether the answer is the right one is real life.
While I’m not certain, I’m fairly confident that most people’s minds don’t assign probabilities at all. At least when this thread began, it was about trying to infer implicit probabilities based on how people update their beliefs; if there is any situation that would lead you to conclude that it’s not Friday, then that would suffice to prove that your mind’s internal probability of “Today is Friday” is not 1.
Most of the time, when people talk about probabilities or state the probabilities they assign to something, they’re talking about loose, verbal estimates, which are created by their conscious minds. There are various techniques for trying to make these match up to the evidence the person has, but in the end they’re still just basically guesses at what’s going on in your subconscious. Your conscious mind is capable of assigning probabilities like 0.999999999.
Taking a (modified) page from Randaly’s book, I would define absolute certainty as “so certain that one cannot conceive of any possible evidence which might convince one that the belief in question is false”. Since you can conceive of the brain-in-the-vat scenario and believe that it is not impossible, I would say that you cannot be absolutely certain of anything, including the axioms and logic of the world you know (even the rejection of absolute certainty).
often misattributed to Plato
Jack Handey
So good even dead people want to drink it.
(Reference.)
To be fair, if you see a watering hole surrounded by skeletons, it probably means the water’s toxic.
That’s the joke.
Ah. I thought it was something like “I won’t drink from this because it’s reserved for skeletons (and will therefore die and perpetuate the cycle),” which was just bizarre enough to be a joke.
John C Wright
Is there a name for the fallacy of claiming to be an expert on the specific contents of other people’s subconsciouses?
This sounds like it implies that both things must be true. It seems to me that either would be sufficient to justify someone saying they believe nothing.
St. Francis of Assisi (allegedly)
A luxury, once sampled, becomes a necessity. Pace yourself.
Andrew Tobias, My Vast Fortune
--Delmore Schwartz, “Calmly We Walk Through This April’s Day”; quoted by Mike Darwin on the GRG ML
I like it when I hear philosophy in rap songs (or any kind of music, really) that I can actually fully agree with:
-- Vince Staples, “Versace Rap”
It’s quite sad that Tupac Shakur is the focus of so many conspiracy theories, because he was quite the sceptic about wasting your time on this stuff when there was real work to do making the world better.
I always thought it was interesting that Tupac got all the conspiracy theories while Biggie got none, despite the fact that Biggie released an album called Ready to Die, died, then two weeks later released an album called Life After Death. It’s probably because Tupac’s music appeals more to hippie types who are into this kind of stuff.
Anton Lavey, The Satanic Bible, The Book of Satan II
Isn’t it better to examine a falsehood to discover why it was so popular and appealing before throwing it away?
Then, to continue the metaphor, we should study it by telescope from afar, not as a present and influential entity in our own sphere of existence, but rather a distant body, informative but impotent, the object of curiosity rather than devotion.
— Jon Elster, Explaining Social Behavior: More Nuts and Bolts for the Social Sciences, p. 16
Only if they won’t let you throw it away.
James Wilson
Counter-quote.
Only loosely. The insightful part of the grandparent quote is the third sentence, which complements the moral-greyness issue quite well.
I think it is only slightly insightful, at best. It’s a gross simplification of how most people experience, and actually (under-the-hood) perform, moral calculations, and it simplifies away most of the interesting stuff.
Eric Raymond
Empirically, heaping scorn on everyone and seeing who sticks around leads to lots of time wasted on flame wars.
Straw man. The grandparent explicitly made the scorn conditional, not ‘on everyone’.
Failure to steel man. Replacing “everyone” with “people” leaves the basic point unchanged.
ETA: … or, I should say, leaves a point that (1) deserves reply and (2) was probably what the original hyperbolic version was getting at anyway.
Abuse of the ‘steel man’ concept and attempt to introduce a toxic social norm. I am strongly opposed to this influence.
MixedNuts attempts to refute a quote using a non-sequitur. Supporting a false refutation is not being generous, it is being biased. It is being unfair to the initial speaker.
So much so that it leaves the basic point a straw man.
Steel-manning a refutation does not equal supporting that refutation. In fact, steel-manning entails criticizing the original refutation, at least implicitly.
However, when a claim is plausibly intended to be a hyperbolic version of a reasonable claim, pointing out that the hyperbolic version is a straw man, without addressing the reasonable version, is mostly just poisoning the discourse.
(This charge doesn’t apply to you if you sincerely believed that MixedNuts was non-hyperbolically claiming that literally everyone has scorn heaped on them in the community under discussion, or that MixedNuts would be read that way by many readers.)
I oppose your influence in this context for the aforementioned reasons.
The point that you think is reasonable is still a straw man.
It would help me to understand why my version is a straw man if you would steel-man it. Then I could compare your steel man to my straw man and better feel the force of your criticism. (I certainly wouldn’t take you to be supporting my straw man, which seemed to be your earlier concern.)
As it stands, I am puzzled by your accusation because Eric Raymond said, “Let’s drive away people unwilling to adopt that ‘git’r’done’ attitude with withering scorn …”. Why is it a straw man to characterize this as “heaping scorn on people and seeing who sticks around”?
Is it because you read it as “heaping scorn on people randomly...”, rather than as “heaping scorn on people who are unwilling to adopt that ‘git’r’done’ attitude …”? Or is it something else?
There isn’t a convenient steel man available. Not all wrong (or, to be agnostic with respect to the correctness of our positions, disagreed with) positions have another position nearby in concept space that is agreed with (or, sometimes, disagreed with only with significant respect and more complicated reasoning).
Because that is a different described procedure. They are similar in as much as scorn is applied in both cases but the selection process for when scorn is applied is removed and the intended outcome is changed.
To illustrate, consider taking the required equivocation back in the other direction. We end up with:
This seems to be a different empirical claim. It is also a more controversial claim and one that is less obviously correct. I certainly wouldn’t expect scorn to be the optimal response in such circumstances but the claim that it wastes more time than the described alternative is still an empirical claim that would actually require empiricism to be done and cited. It isn’t something that I have seen anywhere.
This was a helpful comment.
I agree that, in general, wrong positions may lack steel-man versions. However, I am not convinced that this is the case here. Indeed, it seems to me that you provide just such a steel man in your comment.
You are reading “seeing who sticks around” as the reason why the scorn is being applied. This is a possible reading. It might be the intended meaning, but it might not. The intended meaning might just be that “seeing who sticks around” is an outcome, and not the intended outcome.
If the meaning was what you said, the sentence could have been written as “heaping scorn on people to see who sticks around”. That would have been equally concise and less ambiguous. Since that wasn’t what was written, your reading is less certain.
Refutations of straw men are usually obviously correct. That is why straw men are offered. The steel man version of the straw-man-based refutation will rarely be so obviously correct, but it will be obviously better. The steel man will be more relevant, raise more important issues, be more likely to move the conversation forward in a productive way, and so on.
You seemed to me to be offering just such a steel man when you wrote,
Yes, your version is a different empirical claim, but steel men are generally different claims from the original “unsteeled” version. Your version raises controversial issues, but that need not obviate productive discussion.
Most importantly, and as you point out, your steel man version raises empirical issues, which would help keep the conversation connected to reality. Moreover, addressing those empirical questions would probably require getting into the specific dynamics of the community under discussion. (What have the documented conversations in this specific community actually been like? What are the actual social dynamics and the actual history of how they’ve changed over time? What has this community accomplished, and under just what conditions, as a function of how much scorn was being applied? Etc.)
This would make the conversation far more likely to stay relevant to the actual matter at hand. The conversation would be more likely to stay at the object level, instead of floating in the meta level, where accusations of fallacies live.
To summarize, I think that what you offered is a good steel man of MixedNuts’s original claim for the following reasons:
It is recognizably related to what MixedNuts said, although it is different. Moreover, it is plausible that he could be convinced that this is what he should have said.
The antecedent (“driving away people unwilling to adopt that ‘git’r’done’ attitude with withering scorn, rather than waste our time pacifying tender-minded ninnies and grievance collectors”) is not a straw man.
It raises promising and empirically grounded points of disagreement, as I argue above.
I don’t believe that it does, and here’s why.
Heaping scorn on everyone and seeing who sticks around is a selection process; the condition for surviving is being able to accept scorn, whether or not such scorn is warranted by the value system of the society. This is somewhat similar to hazing.
Heaping scorn on a specific group of people for their unwillingness to adopt the values of the society (or, rather, some powerful subset of the society which has enough clout to control how things are run) is a selection process based on something of value to the society, and is more like punishment or selective admissions: people with the valued trait are encouraged, those without are allowed to leave.
It would appear that there are very different implications, as the former selects those who can take unjustified scorn (a quality of dubious value), and the latter selects for any demonstrable quantity desired by the society (in this case, a specific attitude towards problem-solving).
This is a good argument for the claim that MixedNuts’s hyperbolic version, read literally, misses something important. (Your argument convinces me, anyway.)
It is not clear to me that your argument addresses the “steel man” version in which “everyone” is replaced by “people who are unwilling to adopt that ‘git’r’done’ attitude”.
Eric Raymond isn’t suggesting that. Why are you?
A relevant example:
http://arstechnica.com/information-technology/2013/07/linus-torvalds-defends-his-right-to-shame-linux-kernel-developers/
Linux kernel seems to me a quite well-managed operation (of herding cats, too!) that doesn’t waste lots of time on flame wars.
I don’t follow kernel development much. Recently, a colleague pointed me to the rdrand instruction. I was curious about Linux kernel support for it, and I found this thread: http://thread.gmane.org/gmane.linux.kernel/1173350
Notice that Linus spends a bunch of time (a) flaming people and (b) being wrong about how crypto works (even though the issue was not relevant to the patch).
Is this typical of the linux-kernel mailing list? I decided to look at the latest hundred messages. I saw some minor rudeness, but nothing at that level. Of course, none of these messages were from Linus. But I didn’t have to go back more than a few days to find Linus saying things like, “some ass-wipe inside the android team.” Imagine if you were that Android developer, and you were reading that email? Would that make you want to work on Linux? Or would that make you want to go find a project where the leader doesn’t shit on people?
Here’s a revealing quote from one recent message from Linus: “Otherwise I’ll have to start shouting at people again.” Notice that Linus perceives shouting as a punishment. He’s right to do so, as that’s how people take it. Sure, “don’t get offended”, “git ’er done”, etc—but realistically, developers are human and don’t necessarily have time to do a bunch of CBT so that they can brush off insults.
Some people, I guess, can continue to be productive after their project leader insults them. The rest either have periodic drops in productivity, or choose to work on projects which are run by people willing to act professionally.
tl;dr: Would you put up with a boss who frequently called you an idiot in public?
Actually, that depends.
Mostly that depends on what the intent (and context) of calling me an idiot in public is. If the intent is, basically, power play—the goal is to belittle me and elevate himself, reassert his alpha-ness, shift blame, provide an outlet for his desire to inflict pain on somebody—then no, I’m not going to put up with it.
On the other hand, if this is all a part of a culturally normal back-and-forth, if all the boss wants is for me to sit up and take notice, if I can without repercussions reply to him in public pointing out that it’s his fat head that gets into his way of understanding basic things like X, Y, and Z and that he’s wrong—I’m fine with that.
The microcultures of joking-around-with-insults exist for good reasons. Nobody forces you to like them, but you want to shut them down and that seems rather excessive to me.
I think it’s pretty clear that Linus is more on the power-play end of the spectrum. Notice his comment above about the Android developer; that’s not someone who is part of his microculture (the person in question was a developer on the Android email client, not a kernel hacker). And again, the shouting-as-punishment thing shows that Linus understands the effect that he has, but doesn’t care.
Also, Linus, as the person in the position of power, isn’t in a position to judge whether his culture is fun. Of course it’s fun for him, because he’s at the top. “I was just joking around” is always what bullies say when they get called out. The real question is whether it’s fun for others. The recent discussion (that presumably sparked the quotes in this thread) was started by someone who didn’t find it fun. So even if there are some “good reasons” (none of which you have named), they don’t necessarily outweigh the reasons not to have such a culture.
That’s not clear to me at all.
Note that management of any kind involves creating incentives for your employees/subordinates/those-who-listen-to-you. The incentives include both carrots and sticks and sticks are punishments and are meant to be so. If you want to talk about carrots-only management styles, well, that’s a different discussion.
I disagree. You treat fun and enjoyment of working at some place as the ultimate, terminal value. It is not. The goal of working is to produce, to create, to make. Whether it’s “fun” is subordinate to that. Sure, there are feedback loops, but organizations which exist for the benefit of their employees (to make their life comfortable and “fun”) are not a good thing.
For what it’s worth, I’ve never worked at a place that successfully used aversive stimulus. And, since the job market for programmers is so hot, I can’t imagine that anyone would willingly do so (outside the games industry, which is a weird case). This is especially true of kernel hackers, who are all highly qualified developers who could find work easily.
I would point out that Linus Torvalds’s autobiography is called “Just for Fun”. Also, Linus doesn’t have employees. Yes, he does manage Linux, but he doesn’t employ anyone. I also pointed out a number of ways in which Linus’s style was harmful to productivity.
Ahem. I think you mean to say that you never touched the electric fence. Doesn’t mean the fence is not there.
Imagine that someone at your workplace decided not to come to work for a week or so, ’cause he didn’t feel like it. What would be the consequences? Are there any, err… “aversive stimuli” in play here?
No need for imagination. The empirical reality is that a lot of kernel hackers successfully work with Linus and have been doing this for years and years.
Which means that anyone who doesn’t like his style is free to leave at any time without any consequences in the sense of salary, health insurance, etc. The fact that kernel development goes on and goes on pretty successfully is evidence that your concerns are overblown.
As of 2012-04-16, 75% of kernel development is paid. I would assume those developers would find their jobs in jeopardy if Linus removed them from development.
Um, Linux kernel doesn’t work like that. Linus doesn’t “add” anyone to development or “remove” anyone. And I don’t know if companies who pay the developers would be likely to fire them if the developers’ patches start to get rejected on a regular basis.
Oh, and you misquoted your source. It’s not 75% of developers, it’s 75% of the share of kernel development and, of course, some developers are much more prolific than others.
Certainly he and his team are less likely to accept patches from people who they’ve had trouble with in the past? And people who have trouble getting patches accepted (for whatever reason) are probably not going to be paid to continue doing kernel development?
It would surprise me if he’s never outright banned anyone.
Thanks for the correction, edited my comment above.
You are describing a (dubious) difference in word use, not a difference in how the world works.
I don’ t think so—it is a difference in how the world works. Anyone in the world can submit kernel patches. The filtering does not occur at the people level, it occurs at the piece-of-code level.
Linus does not say “I pronounce you a kernel developer” or “You’re no longer a kernel developer”—he says “I accept this patch” or “I do not accept this patch”.
No, I mean that touching the electric fence did not make me a more productive worker.
I’m not saying that Linus’s style will inevitably lead to instant doom. That would be silly. I’m saying that it’s not optimal. Linux hasn’t exactly taken over the world yet, so there’s definitely room for improvement.
It’s important to distinguish between Linux the operating system kernel, and the complete system of GNU+Linux+various graphical interfaces sometimes called “Linux”.
The Linux kernel can also be used with other userspaces, eg. Busybox or Android, and it’s very popular in these combinations on embedded systems and phones/tablets respectively. GNU+Linux is popular on servers. The only area where Linux is unsuccessful is desktops, so it’s unfortunate that desktop use is so salient when people talk about “Linux”.
Linus only works on the kernel itself, and that’s making great progress towards taking over the world.
Yes, I used to work for RMS; I am well aware of the difference. I should also note that most of the systems you mention use proprietary kernel modules; it would be better if they didn’t, and perhaps if Linus’s attitude were different, there would be more interest in fixing the problem.
Also, desktops are where I spend most of my time, so I think they still matter a lot.
I use GNU+Linux on the desktop myself, and I share RMS’s goals, although I’m willing to make bigger compromises for the sake of practicality than him. Linus does not share RMS’s goals, so my point is that from Linus’s point of view his management techniques are highly effective.
Pure hypothesis: Linux being unsuccessful on desktops is not a coincidence, because Linux is written in a low-empathy environment, but writing UI for the general public means that you don’t get to blame users when they don’t like your software.
Possible test: Firefox is fairly good open source software for the general public. What’s the culture at Mozilla/Firefox like for the programmers?
Um. The claim by novalis is that the Linux kernel is written in a “low-empathy” environment. The kernel has nothing to do with UI which, along with most applications, is quite separate. Linus has no influence over UI design or user-friendliness in general.
There are two main GUI environments on Linux—Gnome and KDE. I don’t know what the atmosphere is for developers inside these organizations. I think there is a fair amount of infighting and office politics, but I have no clue if they are polite and tactful about it.
You know what Ubuntu is named after, BTW?
Yes, I do, though I don’t see the relevance.
(Evidence about whether the Ubuntu people are ‘friendly’.)
It’s evidence in the same sense that the name of product like Repairwear Laser Focus Wrinkle & UV Damage Corrector is evidence that this face cream laser focuses your wrinkles and corrects your UV damage 8-/
“Ubuntu”, by the way, means a lot more than friendliness.
How do you know?
How do you know? (other than in a trivial sense that anything in real life is not going to be optimal)
You’re making naked assertions without providing evidence.
Well, I can tell you that afterwards, I felt like shit and didn’t get much done for a while. Or I started looking for a new job (whether or not I ended up taking one, this takes time and mental energy away from my current job). And getting yelled at has never seemed to me to correlate with me actually being wrong, so I’m not clear on how it would have changed my behavior.
Upthread, you linked to an article which quotes someone saying, “Thanks for standing up for politeness/respect. If it works, I’ll start doing Linux kernel dev. It’s been too scary for years.” I also pointed out, in my discussion of the rdrand thread, that Linus wastes a bunch of time by being cantankerous. And speaking of the rdrand thread (which I swear I didn’t choose as my example for this reason; I really did just stumble across it a few weeks ago), your linked article also quoted Matt Mackall, whom Linus yelled at in that thread: he’s no longer a kernel hacker. Is Linus’s attitude why? Well, he’s complained about Linus’s attitude before, and shortly after that thread, he ceased posting on LKML. And he’s probably pretty smart—he wrote Mercurial -- so it’s a shame for the kernel to lose him.
I can tell you that I, personally, would be uninterested in working under Linus, although kernel development isn’t really my area of expertise, so maybe I don’t count.
I hope you didn’t take my position to be that yelling at people is always the right thing to do. There certainly is lots of yelling which is stupid, unjustified, and not useful in any sense.
The issue is whether yelling can ever be useful. You are saying that no, it can never be. I disagree.
The secondary issue is whether Linus runs kernel development in a good/proper/desirable/productive way. The major question here is the metric—how do we decide what is a “good/… way”. From your point of view, if you define a good way as “fun” for developers, then sure, it probably is possible to run the kernel in a more fun way.
From my point of view, the proof of the pudding is in the eating. Is the kernel a good piece of software? I would argue that it is, and that it is a remarkably successful piece of software. More, I would argue that Linus deserves a lot of credit for making it so. Given this, I’m suspicious of claims that Linus’ way is “non-optimal”, especially if there is the strong underlying current of “I, personally, don’t like it”.
No, the issue is whether Linus’s yelling is useful, or, whether yelling is generally useful enough in free/open source projects that it outweighs the costs. Specifically, whether “Let’s drive away people unwilling to adopt that “git’r’done” attitude with withering scorn, rather than waste our time pacifying tender-minded ninnies and grievance collectors. That way we might continue to actually, you know, get stuff done.” is good or bad advice.
You should be even more suspicious, then, of Linus saying that it’s necessary and proper, given that he’s said that he, personally, does like it.
Do you think we have a basic difference in values or there’s some evidence which might push one of us towards the other one’s position?
He has the huge advantage in that he actually delivered and continues to deliver. His method is known to work. Beware the nirvana fallacy.
That’s a pretty good question.
Hypothesis: I think some of it might be a case of the “Typical Mind Fallacy”. Maybe if Linus yelled at you, you wouldn’t be bothered at all. But I know that my day would be ruined, and I would be less productive all week. So I assume that many people are like me, and you assume that many people are like you.
I would be curious about a controlled experiment, where free/open source project leaders were told to act more/less like Linus for a month to see what would happen. But I guess that’s pretty unlikely to happen. And one confounder is that a lot of people might have already left (or never joined) the free/open source community because of attitudes like Linus’s. We could measure project popularity (say, by number of stars on github) against some rating of a project’s friendliness.
We might also survey programmers in general about what forces do/don’t encourage them to work on specific free/open source projects.
I’m sure there are studies available of what sorts of management are effective generally. I’ll ask my MBA friend. I did a two-minute Google search for studies about what cause people to leave their jobs generally, but found a such a variety of conflicting data that I decided it would need more time than I have.
These things could definitely influence me to change my mind.
I also think there might be a value difference, in that I do value fun pretty highly. That’s especially true in the free/open source world, where nobody’s getting rich, and where a lot of people are volunteers (this last is less true on Linux than on some other projects, but perhaps part of that is that all of the volunteers have been driven away)? But in general, I would like to enjoy the thing I spent eight (or twelve) hours a day on. And if even if this did make me somewhat less productive than I would be if I was less happy, I don’t really mind that much.
Yes, I think the Typical Mind Fallacy plays some role in this. But then let’s explicitly go around it. Let’s postulate that the population of, say, qualified programmers, is diverse. Some are shy wallflowers, wilting from any glance they perceive as disapproving, some thrive in a rough-and-tumble environments where you prove your solution is better by smashing your opponent into bits. Most are somewhere in between.
This diverse population would self-sort by preferences—the wallflowers would gravitate towards polite, supportive, never-a-harsh-word environments (in our case, OSS projects), while the roar-and-smash types will gravitate towards the get-it-done-NOW-you-maggot environments. Since OSS projects are easy to create and it’s easy for developers to move from project to project, the entire system should evolve towards an equilibrium where most people find the environment they’re comfortable with and stick with it.
Now, that seems to me a fine way for the world to work. But would you object to such a state of the world, after all, there are some projects there which are “mean” and where you (and likely some other people) would be uncomfortable and unproductive?
Oh, there are piles and piles of those. The only problem is, they all come to different conclusions (with a strong dependency on the decade in which the study was done).
Put yourself into manager’s shoes and consider the difference between instrumental and terminal values.
You, an employee/contributor, value fun highly. That is a terminal value for you. Being productive is a secondary goal and may also be an instrumental value (some but not all people are not having fun if they see themselves as being unproductive).
Now, for a manager, the fun of his employees/contributors/developers is NOT a terminal value. It’s only an instrumental value, the true terminal value is to Get Shit Done.
Do you see how that leads to different perspectives?
Creating projects is easy; forking is hard. And nobody wants to create a new kernel from scratch. Kernel hackers don’t really have a lot of options. So I don’t think your theoretical world has anything to do with the real world. Also, it seems to me that culture doesn’t end up contained within a single project; Linux depends on GCC, for instance, so the Linux people have to interact with the GCC people. Which means that culture will bleed over. I was recently at a technical conference and a guy there said, “yeah, security is perhaps the only community that’s less friendly than Linux kernel development.” So now it’s not just one project that’s off-limits, but a whole field.
I also don’t think there are necessarily any actual roar-and-smash types. That is, I think a fair number of people think it’s fun to lay a beatdown on some uppity schmuck. I’ve experienced that myself, certainly. Why else would anyone bother wasting time arguing with creationists? But I’m not sure there are a lot of people who find it fun to be on the losing end of this. This is an extension of Arguments as Soldiers. When you’re having a knock-down, drag-out fight with someone, it’s harder to back down.
Notice that the original example of a person in that category was Mannie O’Kelly—a fictional character.
[Linus]:
(later in that email, he does give a nod to effectiveness, but that doesn’t seem to be his primary motivator).
I think it remains an open question whether Linus’s style is in fact better than the alternative from the “get shit done” perspective. And the original quote implied, without evidence, that in fact it is. Not really sure why this is a “rationality” quote.
Forking is pretty easy—it’s getting people to follow your fork that’s hard.
Well, there are certainly enough programmers who prefer to discuss code in terms of “only a brain-dead moron could write a library that does foo” or “why is this retarded object making three fucking calls to the database for each invocation”, etc.
And while people generally don’t find it fun to be on the losing side, this does not stop them from seeking and entering competitions and competitive spheres. Consider sports, e.g. boxing or martial arts.
Steelman this. I am pretty sure that in the North European culture being “subtle or nice” is dangerously close to being dishonest. You do not do anyone a favour by pretending he’s doing OK while in reality he’s clearly not doing OK. There is a difference between being direct and blunt—and being mean and nasty.
As I said, Linus’ style is proven to work. We know it works well. An alternative style might work better or it might not—we don’t know.
I suspect you have a strong prior but no evidence.
I don’t understand what you’re saying here. Are you saying that anyone is proposing that Linus to act in a way that he would see as dishonest? Because I don’t think that’s the proposal. Consider the difference between these three statements:
Only a fucking idiot would think it’s OK to frobnicate a beezlebib in the kernel.
It is not OK to frobnicate a beezlebib in the kernel.
I would prefer that you not frobnicate a beezlebib in the kernel.
The first one is rude, the second one is blunt, the third one is subtle/tactful/whatever. Linus appears to think that people are asking for subtle, when instead they’re merely asking for not-rude. Blunt could even be:
When you frobnicate a beezlebib, it fucks the primary hairball inverters, so never do that.
So he doesn’t even have to stop cursing.
There are many FOSS projects that don’t use Linus’s style and do work well. What’s so special about Linux?
I’ve run a free/open source project; I tried to run it in a friendly way, and it worked out well (and continues to do so even after all of the original developers have left).
I can also point to Karl Fogel’s book “Producing Open Source Software”, where he says that rudeness shouldn’t be tolerated. He’s worked on a number of free/open source projects, so he’s had the chance to experience a bunch of different styles.
We keep hitting the Typical Mind Fallacy over and over again :-)
Let me offer you my interpretation: the first one is blunt and might or might not be rude, depending on what the social norms and context are (and on whether thinking about frobnicating the beezlebib does provide incontrovertible evidence of severe brain trauma). The second one is not blunt at all, it’s entirely neutral. The third one is a slighly more polite version of neutral. Your fourth example is still neutral, by the way—there’s nothing particularly blunt about explaining why something should not be done (or about using four-letter words, for that matter).
To contrast I’ll offer my examples:
(rude) You are a moron and can’t code your way out of a wet paper bag! Stuff your code where the sun don’t shine and never show it to me again!
(blunt) This is not working and will never work. You need to scrap this entirely and start from scratch.
(subtle) While this is a valuable contribution, we would really appreciate it if you went and twiddled the bogon emitter for us while we try to deal with the beezlebib frobnication on our own.
It’s only the most successful open software ever. Otherwise, not much :-P
I recently came across this, which seems to have some evidence in my favor (and some irrelevant stuff): http://www.bakadesuyo.com/2013/10/extraordinary-leader/
A more direct approach might be: “no patches which frobnicate a beezlebib will be accepted”.
I would say the size (in terms of SLOC count), scope (everything from TVs to supercomputers), lack of a equivalent substitute (MySQL or Postgres? Apache or Nginx? Linux or… BSD?), importance of correctness (its the kernel, stupid), and commercial involvement (Google, Oracle, etc.) make it very different from most FOSS projects. Mostly I’d say the size, complexity and very low tolerance of bugs.
I have no idea if Linus’s attitude is helpful or not. I tend to think he could do better with more direct, polite approaches like the above, but I don’t hold that belief very strongly.
Posts like this encourage me to remark that I want to have a website where I feel free to respond to others’ actual words, not by how I’d rationalize those words if I were personally committed to them.
I agree. My further comments shouldn’t detract from this fact.
I don’t agree. Every CS student and their mother wants to write their own OS. There are a lot [of] projects out there.
As to the effectiveness of the community, there’s an important datapoint. BSD came before Linux, but Linux took over the world. I think this is generally attributed to a more vibrant community of developers.
The right comparison is to compare that to how much you’d be bothered if you had to clean up the mess left by an incompetent coworker. Or having to deal with an incompetent bogon in middle management.
Unsurprisingly, I’ve had to deal with both of these things. It has never seemed to me that yelling at someone could make them more competent. Educating them, or firing them and replacing them seems like a better plan.
The issue is whether the person in question would have been a productive contributor.
Well Bill Gates and Steve Jobs have similar reputations.
Bill Gates failed to create an organization that would thrive in his absence. We’ll see how Steve Jobs did in a few more years (it seems likely that he did better, but he also had the famous “reality distortion field”, which Linus doesn’t). Steve Jobs also got kicked out of his own company for a bunch of years.
During which time the company tanked.
In any case, your argument was that Linus might have better succeeded in “taking over the world” if he had used a less confrontational style. My point is that the people who did “take over the world” used the same style.
Punishments seem to have rapidly decreasing returns, especially given the availability of alternatives that are less abusive. Otherwise we’d threaten to people when we wanted to make them more productive, rather than rewarding them—which most of the time we don’t above a low level of performance.
This is a shift of topic—heaping scorn is one particular sort of punishment. Firing someone who isn’t working after having given them several warnings is a punishment, but it isn’t the same as a high-flame environment.
I don’t understand the point that you are arguing.
Basically all human groups—workplaces, societies, countries, knitting circles—have punishments for members who do unacceptable things. The punishments range from a stern talking to, ostracism, or ejection from the group to imprisonment, torture, and killing.
In which real-life work setting you will not be punished for arbitrarily not coming to work, for consistently turning in shoddy/unacceptable results, for maliciously disrupting the workplace?
Of course all societies have punishments, but that doesn’t address the point you were responding to which was that Linus was more on the power-play end of the spectrum. The ratio of reward to punishment, your leverage as determined by the availability of viable alternatives, matters in determining which end of that spectrum you’re on.
And that has implications for the quality of work you can get from people—while you may be punished for blatantly shoddy work, you’re not going to be punished for not doing your best if people don’t know what that is. The threat of being fired can only make people work so hard.
Um. How do you determine the ratio of reward to punishment for Linux kernel developers?
Also whether you engage in power play is determined by your intent, not by ratio or leverage. Those determine the consequences (accept/revolt/escape) but not whether the original critique was legitimate or purely status-gaining.
You bring up some good points. I would go so far as to say that given a) the amount of subjective interpretation from the observers, b) the limited number of first-hand witnesses, and c) the difficulty of comparing the small number of sample societies for which we have observers, that in the absence of evidence roughly the strength of a formal study, this thread may not be able to reach an agreeable conclusion for lack of data.
The claim, as I understand it, is that the culture trades off fun for productivity. A common example given is Apple, where Steve Jobs was a hawk that excoriated his underlings, and thus induced them to create beautiful, world-conquering products.
Also that the culture selects for the people who find being productive fun.
While the more socially enlightened attitudes lead to very effective and high signal-to-noise conflict handling, as can be observed on Tumblr and MetaFilter?
Here’s my thought process upon reading this. (Initially, I assumed “git ‘er done” meant something like ‘women are unimportant except as sex objects’, and I misread “unwilling” as “willing”.)
‘How come that guy, who when talking about sex on his blog gets mind-killed to the point of forgetting how to do high-school maths, makes so much sense everywhere else? Maybe he was saner when younger, then got worse with age, or something.’ I follow the link, expecting it to go somewhere other than Armed and Dangerous, e.g. somewhere on catb.org.
I notice the link does go to his blog, and to a recent post at that. ‘So he is still capable of talking sense about such topics after all?’ I notice I am confused.
I realize he said “unwilling” not “willing”. ‘Er… Nope. He’s crazy as usual.’
Appalled at the idea that anyone, even ESR, would say anything like that in public with an almost straight face, I decide to look “git ’er done” up. ‘Oh, that makes perfect sense, and I agree with him. But that’s not about sex (except insofar as the cut-through-the-bullshit communication style is less rare among men than among women), so that doesn’t actually show he’s not mind-killed beyond all repair.’
(Anyway, if an adult woman complains because you called her a girl, the course of action that leaves you the most time to get stuff done is apologizing, not doing that again, and getting back to work, not endlessly whining about how ridiculous the PC crowd are.)
Not necessarily, it might just encourage further frivolous complaints.
As opposed to feeding trolls, which is widely known to be extremely effective in making them shut up?
In this context, the group you position as ‘trolls’ has been described as frivolous complainers. You advocate apologising and complying. Eugine is correct in pointing out that this can represent a perverse incentive (both in theory and in often-observed practice).
I dunno… if someone’s goal is to fuel a flamewar to discredit you, it would seem to me that ranting about that is more likely to make their day than just reacting as though they had pointed out you misspelled their name and then going back to your business.
The courtesy rules at LW are pretty strict. I don’t know whether things are different at CFAR and MIRI, but does insufficient scorn interfere with things getting done?
We use the karma system for that.
LW uses a karma system. I assume that CFAR and MIRI include a lot of in person and private conversation which isn’t subject to a karma system.
How do you think the effectiveness of cultures which have karma + courtesy compares to cultures which permit flaming?
In the thread, there were at least a couple of examples of high-verbal-abuse programming cultures (Apple and Linux) which get significant amounts of useful work done, and I think there were more.
I don’t believe that scorn just gets dumped on people who don’t have a git’r’done attitude—there have certainly been flame wars about the best programming language and operating systems, and no doubt about other legitimate differences of opinion.
Still, I’m wondering about successful programming environments which enforce courtesy rules. The only one I can think of is Dreamwidth, from its self-description. Running a LiveJournal clone isn’t nothing, but it also isn’t as much as inventing new products. Any others?
So I asked a friend about courteous programming environments, and he mentioned a couple that he’s worked at:
- Webmethods (now part of Software AG)
- Managed Objects (renamed as Novell Business Service Management)
Anyone know where Google fits on the courtesy to flame spectrum? How about Steam?
There is a bit of a difference between commercial, for-profit companies (especially public ones) and FOSS projects.
-- B. F. Skinner, Beyond Freedom and Dignity
Very close. I’d perhaps suggest that a person is less dignified when desperately seeking a reward that certainly isn’t going to come.
-- Will Wildman, analysis of Ender’s Game
Occam’s Law of Bullshit
I found this to be slightly unsettling when I realized it, though we may be talking about different things.
-The Great Learning, one of the Four Books and Five Classics of Confucian thought.
It is not July. It is August.
Saw this under “latest rationality quotes” and was like “man, I’m really missing the context as to how this is a rationality quote.”
“If it is July, I desire to believe it is July. If it is August, I desire to believe it is August...”
If the Romans had been more willing to rename months they were unwilling to keep in their original places, we might have a much saner calendar.
If people in the 1500 years since the Romans had been more willing to rename months...
Now you’ve got me thinking about the minimum level of rationality/processing power necessary to determine the month accurately…
Fixed! The perils of copy/paste.
There are no happy endings.
Endings are the saddest part,
So just give me a happy middle
And a very happy start.
-Shel Silverstein
But but peak/end rule!
X will never reach [arbitrary standard], so let’s not try to improve X.
I think the point is not that endings are generally and extrinsically sad, but rather that by definition, an ending is a thing which is sad, if we take the existence of such a thing to be good. (The ending of a bad thing, for example, is an exception, though generally because it allows for the existence of good things). The response, then, would not to be to try to improve endings, but rather to try to do away with them (and, barring that, improve the extrinsic qualities of the non-ending parts).
.
When a concept is inherently approximate, it is a waste of time to try to give it a precise definition.
-- John McCarthy
Thus, whenever you look in a computer science textbook for an algorithm which only gives approximate results, you will find that the algorithm itself is very vaguely specified, since the result is just an approximation anyway.
(I would have said: “When a concept is inherently fuzzy, it is a waste of time to give it a definition with a sharp membership boundary.”)
Thus we merely require citizens to “be responsible adults” before they can vote rather than give a sharp boundary such as 18 years old, college applications tell you “don’t write a long, rambling essay” rather than enforce a 500-word limit, and food packaging specifies “sometime in September” for the expiration date.
Sharp membership boundaries are useful to make it easy to test for the concept. Even if the concept is fuzzy and the test is imperfect, this doesn’t need to be a waste of time.
Sharp membership boundaries, however, often result in people forgetting the fuzziness of the concept—there are some people who vote without being responsible adults, because they can; an essay can be boring and rambling at 450 words or impressive and concise at 600; and food can be good a bit past its expiration date (it doesn’t usually go in the other direction in my experience, presumably because the risk of eating spoiled food vastly outweighs the risk of mistakenly tossing out good food, so expiration dates are the very early estimates).
Though sometimes it’s even more useful to acknowledge that the sharp-boundaried concept we’re testing for is different from, though perhaps expected to be correlated with in some way, the fuzzy concept we were initially interested in.
That helps us avoid the trap of believing that 17-year-olds aren’t responsible adults but 18-year-olds are, or that 550-word essays are long and rambling but 450-word essays aren’t, or that food is safe to eat on September 25 but not on September 29. None of that is true, but that’s OK; we aren’t actually testing for whether voters are responsible adults, essays are long and rambling, or food is expired.
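The point about testing for a sharp proxy rather than the fuzzy concept itself can be sketched in a few lines of Python; the safety scores and the 14-day cutoff below are invented purely for illustration:

```python
# Hypothetical fuzzy concept: how safe a carton of milk is to drink,
# as a score in [0, 1] that degrades gradually after packaging.
def safety_score(days_since_packaging: int) -> float:
    return max(0.0, 1.0 - 0.04 * days_since_packaging)

# Sharp proxy actually printed on the carton: a single expiration date,
# chosen conservatively so the fuzzy score is still high at the boundary.
EXPIRATION_DAYS = 14

def is_expired(days_since_packaging: int) -> bool:
    return days_since_packaging > EXPIRATION_DAYS

# The proxy is cheap to check, but it differs from the fuzzy concept:
# nothing special happens to the milk between day 14 and day 15.
print(round(safety_score(14), 2), is_expired(14))  # → 0.44 False
print(round(safety_score(15), 2), is_expired(15))  # → 0.4 True
```

The sharp test is the thing we can actually administer; the fuzzy score is the thing we actually care about, and conflating the two is exactly the trap described above.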
Just because humans do it doesn’t mean it’s a good idea.
To clarify, I also think all of these are good ideas; not necessarily the best possible, but definitely useful.
It doesn’t prove it’s a good idea, but it’s evidence in its favour.
Well, sure. But that doesn’t mean it’s very strong evidence: I’d expect to see an average human (or nation) do something stupid almost as often as they do something intelligent.
We are obviously starting from very different premises. To me, the fact that lots of people do something is very strong evidence that the behaviour is, at least, not maladaptive, and the burden of proof is very much on the person suggesting that it is. And the more widespread the behaviour, the stronger the burden.
Alternatively, you could just look at the evidence. When legal systems have replaced bright-line rules with 15-factor balancing tests, has that led to better outcomes for society as a whole? Consider in particular the criteria for the Rule of Law. In the mid-20th century, coincident with high modernism and utilitarianism, these multi-part, multi-factor balancing tests were all the rage. Why are they now held in such disdain?
Unfortunately, the fact that lots of people do something may merely be an indication of a very successful meme: consider major religions.
I will certainly grant that having a sharp restriction is better than a 15-factor balancing test, but I’m not arguing for 15-factor balancing tests.
I’d go further, but I’ve just noticed that I don’t really have much evidence for this belief, and I should probably go see how accomplished Chinese universities (which judge purely off the gaokao) are versus American universities first.
Huh? No it doesn’t. It says an entirely different thing.
Alfred, Lord Tennyson, Ulysses
The complexity of software is an essential property, not an accidental one. Hence, descriptions of a software entity that abstract away its complexity often abstract away its essence.
Fred P. Brooks, No Silver Bullet
I’ve always had misgivings about this quote. In my experience about 90% of the code on a large project is an artifact of a poor requirement analysis/architecture/design/implementation. (Sendmail comes to mind.) I have seen 10,000-line packages melting away when a feature is redesigned with more functionality and improved reliability and maintainability.
This is true, but the connotations need to be applied cautiously. Complexity is necessary, but it is still something to be minimised wherever practical. Things should be as simple as possible but not simpler.
More concretely, sometimes software can be simplified and improved at the same time.
This isn’t necessarily true if the complexity is very intuitive. If it takes ten thousand lines of code to accurately describe the action “jump three feet in the air”, then those ten thousand lines of code are describing what a jump is, what to do while in mid-air, what it means to land, and other things that humans may grasp intuitively (assuming that the actor is constructed in a manner similar to a human).
Additionally, there are some complex features which are not specific to the software. We don’t need to describe how a particular program receives feedback from the motor and sensors, how it translates the input of its devices, if these features are common to most similar programs—the description of those processes is part of the default, part of the background that we assume along with everything else we don’t need to derive from fundamental physics.
In other words, the complexity of software may correspond to a feature which humans may be able to understand as simple—because we have the prior knowledge necessary, courtesy of common nature and nurture. A full description of complexity is necessary if and only if it is surprising to our intuition.
That is, in some sense, his point—a phrase like “jump three feet in the air” does abstract away most of the computational essence, making it seem like a trivial problem, which it really, really isn’t.
Le Bovier de Fontenelle
This explains all those urges I get to burn witches, my talent at farming, all my knowledge at hunting and tracking and my outstanding knack for feudal political intrigue.
(Composition is not the relationship to previous minds that education entails. Can someone think of a better one?)
Derivation.
Much better.
We rest upon the frontal lobes of giants.
Is that a praise of educated minds, or a caution against too readily classifying a mind as educated?
(Possibly related: http://lesswrong.com/lw/1ul/for_progress_to_be_by_accumulation_and_not_by/)
I read it as expressing the same view as The Neglected Virtue of Scholarship.
From the description of him on Wikipedia, I am certain it is the former, although the bone wedrifid picks with “composed” is symptomatic of where he falls short of his contemporary, Voltaire. He was a most refined, civilised, intelligent, and educated writer, very popular among the intellectual class, and achieved memberships of distinguished academic societies, but his strength, a great one indeed, was in writing well on what was already known, and he created little that was new. Voltaire’s name lives to this day, but Fontenelle’s, while important in his time, does not.
Scholarship is indeed a virtue, but Fontenelle’s was not in service of a higher goal.
Sarah Hoyt
I see small examples everywhere I look; they’re just too specific to point the way to a general solution.
James Portnow/Daniel Floyd
Josh Billings
(h/t Robin Hanson)
Famously subverted by Ronald Reagan as:
How is that a subversion? It is exactly in accord with the original.
The key phrase is “our liberal friends.” Everyone suffers from the illusion of transparency, Dunning-Kruger, etc., but Reagan is applying the bias selectively.
More of an anti-death quote, but:
“Must I accept the barren Gift?
-learn death, and lose my Mastery?
Then let them know whose blood and breath
will take the Gift and set them free:
whose is the voice and whose the mind
to set at naught the well-sung Game-
when finned Finality arrives
and calls me by my secret Name.
Not old enough to love as yet,
but old enough to die, indeed
—the death-fear bites my throat and heart,
fanged cousin to the Pale One’s breed.
But past the fear lies life for all-
perhaps for me: and, past my dread,
past loss of Mastery and life,
the Sea shall yet give up Her dead!
.....
So rage, proud Power! Fail again,
and see my blood teach Death to die!”
—The Silent Lord, Deep Wizardry, Diane Duane
-- Daniel Dennett, Consciousness Explained
-- Stanislaw Lem, White Death
(As far as I know, this sweet short story has never been translated into English; I translated this passage myself from my Russian copy, so I will be glad if someone corrects my mistakes.)
Not quite seeing the applicability as a rationality quote; but in “it’s bed” you should drop the apostrophe.
I’d say it’s highlighting the human fallacy to try to ignore and escape from bad news. Instead of facing this prophecy, they just destroyed the ship that delivered it to them and told themselves they were safe.
Actually, the prophecy was about the ship; the spaceship crashed into Aragena, their planet, and then curious inhabitants looked inside (and found nothing dangerous). After that the messenger of their King came and told them that they were all doomed.
And they indeed were.
I imagine there’s an implied “and then the Reapers came” or something.
Probably I’m incredibly late with this, but:
a) thank you, embarrassing mistake fixed
b) I was fascinated by the “volatile atoms” bit. It feels like a line taken from a poem on reductionism. I’m not sure that I managed to convey it, because I’m not so well versed in English fiction and poetry.
Also, I liked their safety measures; it’s a pity they didn’t work in the end.
-Ledaal Kes (Exalted Aspect Book: Air)
Are they a villain who “solves” people by removing them from their way?
(Alternative response: Does “everything” include the puzzle of identifying something that can’t be reduced to a puzzle?)
… You can remove people as problems without “removing” them in the euphemistic sense, i.e. without killing them.
If you befriend them, for example.
And, well, yes. That does count as a puzzle.
The statement just seems weird without any context, I guess. It certainly isn’t narrow.
Would you trust an AI that was being friendly to you as an attempted “solution” to the “puzzle” you presented?
That depends, what sort of solution is it trying to find? If it’s trying to maximize my happiness, that’s all fine and dandy; if it’s trying to minimize my capacity as an impediment to its acquisition of superior paperclip-maximizing hardware, I would object. Either way, I base my trust on the AI’s goal, rather than its algorithms (assuming that the algorithms are effective at accomplishing that goal).
Well, no, but I would never trust an AI if I couldn’t prove (or nobody I trusted could prove) it was Friendly with respect to me, period.
… not that it would much matter, but..
Also, relevance? I’m not really understanding your point in general. Certainly, problems need to be solved, but I would hope that your morality is included as a constraint...
But not necessarily if you’re a fictional character, hence my initial question. I think my point is that I’m not convinced the quote actually means anything, either in its original context or in its use here; it’s sounding like “everything” just means “things for which the statement is true”.
Still don’t understand. By definition, if something is hampering you, it presents a problem: sometimes the solution is “leave it alone, all possible ‘solutions’ are actually worse,” but it’s still something that bears thinking about.
It is somewhat tautological, I’ll grant, but us poor imperfect humans occasionally find tautologies helpful.
This is similar to how I’ve interpreted it. The character comes from a pre-enlightenment society, and is considered one of the greatest intelligence agents largely due to his ability to get results where nobody else can. He privately attributes this success to a rational mind and extensive [chess] skill that trains him to approach things as though they can be solved. While “stop and think about problems like they were games to be won instead of chores to be blamed on someone else” may seem obvious to people used to thinking like that, it’s a major shift for most people.
Robert Wright, The Moral Animal
-- Alfred Korzybski, Science and Sanity, p. 376 (1933)
Interesting, if indeed it is true. I’m not sure how this is supposed to be a rationality quote though.
It’s a quote about thinking about how to think. It’s not the standard way of thinking around here, but thinking interesting thoughts about thinking encourages rationality.
In the sense that you should be searching for the truth in both directions.
Fixed, thanks.
Ludwig Wittgenstein, Tractatus Logico-Philosophicus 5.47321
Though the work of an apologist, I thought this was a good litany against turning oneself into a paper-clip maximiser.
-- Albert Einstein
Glenn Reynolds
I’m downvoting this quote. Read at a basic level, it supports a particular economic theory rather than a larger point of rationality.
For the record, the Austrian Business Cycle Theory is not generally accepted by mainstream economists. This isn’t the place to discuss why, and it isn’t the place to give ABCT the illusion of a “rational” stamp of approval.
I read it as an extension of Gendlin; the damage comes from living in the untrue world, not from the realization that the untruth is untrue, even if the second is much more visible.
Ditto, and downvoting b1shop’s response since the quote did not mention any particular economic theory. Busts caused by widespread bad investments aren’t necessarily the problem, the widespread bad investments are the problem. Blaming the bust in these cases may be shooting the messenger.
That’s not to say all busts are largely caused by widespread bad investments, or anything about why these bad investments happen. It is, however, very clear in hindsight that many boom-phase investments are crazy.
I’m not downvoting Eugine, because Vaniver’s interpretation is interesting. But I am upvoting b1shop, because the quotation does sound like Austrianism on a bumper sticker. So it applause-lights a false fringe theory associated with an anti-empirical intellectual community, in addition to plausibly generating specific false beliefs about economics and/or ethics if taken on its face. (Busts, or more generally human misery, are the reason ‘distortions’ and ‘not making sense’ are a bad thing in the first place; economies aren’t primarily maps.) It’s interesting and revealing in subtle ways, but misleading in banal and obvious ways.
I’m not downvoting Eugine, because Vaniver’s interpretation matches mine. I am upvoting Grant and downvoting b1shop because he claims that there is no rationality message despite the rather obvious cognitive biases that it relates to.
I will refrain from actively supporting the quote because it uses “the real harm”, which makes it a strong statement about relative harms of various activities when that constitutes at best a controversial claim and one that is open to rather a lot of interpretation. (I would endorse an “also” claim or even a “and the most interesting” claim.)
I wouldn’t recommend downvoting b1shop’s response (I didn’t), because they are correct that the basic reading of the quote relies on particular economic assumptions. There are economic theories that put the fault in the bust- if things were intelligently managed, you could keep the bubble inflated at just the right amount to prevent it from popping or inflating further, and never have to deal with the bust.
For example, look at this graph that Krugman posted in 2010. The “projected real GDP” is from Mark Thoma, another economist, but where you choose to draw that line says a lot about your assumptions. The Austrian would basically draw it from trough to trough, claiming that all the reported GDP above that line was activity that could be recorded but didn’t actually generate lasting wealth. In that view, the bubbles are clearly harmful; in Krugman’s view, the busts are harmful. It’s the difference between a trillion dollars that we can never get back, and a trillion dollars that was never there.
Two things. First, a bubble that never deflates or pops is not a bubble, it’s sustainable growth.
Second, there is a LOT of empirical evidence that “intelligent management” of the economy—which has been practiced since the first half of the twentieth century, to various degrees, in many countries—vastly underperforms its promises.
Agreed on both points. I’m not endorsing that theory, or related steelmanned versions.
It does assume that asset bubbles are made up of bad investments which are costly to undo. While this insight may have been originally Austrian, I didn’t think it was at all contentious. The dot-com bubble is a clearer example, as the housing bubble was both an asset bubble and banking failure (and many of the dot-com investments were just off-the-wall crazy).
As Vernon Smith showed, asset bubbles happen even with derivatives whose value is objective (and without central banks). It’s hard for me to see the bust as the problem in those cases.
Would a Keynesian say that any economic downturn can be averted in the face of any and all bad investments?
Doubtful. (I should make clear that I’m not a professional economist, and I couldn’t talk math with a Keynesian without doing serious reading first.) To go off the same graph, it does identify the tech bubble in ~2000 as being above the projected line.
My impression of the difference is that in the terms of a crude analogy, the Austrian prefers to rip the band-aid off, and the Keynesian prefers to slowly peel it back.
All true, but there are many booms which seem to produce crazy investments; the dot-com boom is the most obvious recent example. You don’t need to accept ABCT to accept this, and I’d guess most people who do notice this don’t accept ABCT.
Would you mind explaining? You could PM me or toss it in the Open thread if you don’t think it belongs here.
Sorry, haven’t logged in in a while.
I’m only an econ undergrad, so I’m not a drop-dead expert in economics. However, I work as a business valuator by day, so I like to think I know a thing or two about evaluating the profitability of projects.
There’s a lot of Rothbardian baggage about money I associate with the theory. That may or may not be a separate conversation. Don’t even bother trying to argue against my points here if you believe fractional reserve banking is bad, because we don’t agree on enough to have a productive conversation about this issue. We should instead focus on money and FRB first.
The ABCT story is about excessively low interest rates causing firms to be too farsighted in their planning. If rates increase, then projects that were profitable are no longer profitable, and the economy contracts.
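The mechanism in that story—a rate rise flipping marginal projects from profitable to unprofitable—is just net-present-value arithmetic. A minimal sketch (the cash flows and rates below are made up for illustration):

```python
def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value: discount each year's cash flow back to today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical far-sighted project: pay 100 today, earn 11 a year for 15 years.
project = [-100.0] + [11.0] * 15

cheap_money = npv(0.03, project)  # positive: looks worth doing at low rates
dear_money = npv(0.08, project)   # negative: same project is a loss at 8%
print(round(cheap_money, 2), round(dear_money, 2))
```

The longer-dated the cash flows, the more violently NPV swings with the discount rate, which is why the story singles out far-sighted projects.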
Here’s a few reasons why I don’t like this story:
It requires a massive level of incompetence from entrepreneurs. Arguably the most popular business valuation resource for estimating costs of capital, Duff and Phelps, has a report on adjusting risk free rates for the expected future path. If businesses are unstable because they are not robust to 5% swings in interest rates, then they will likely be unstable due to other shocks as well. ABCT requires them to fall for the same trap over and over again.
It’s drastically asymmetric. ABCT only focuses on distortions caused by too much money being printed. What about the distortions caused by too little money being printed? Modern cases show this is far more damaging. The transmission mechanism isn’t based on interest rates, but it still matters a lot.
The case for expansionary policy causing bubbles is not as strong as many think. NGDP growth during the worst of the housing bubble was only 5%. That’s below average growth over the past few decades, which were a remarkably stable time. Yes, interest rates were low, but that had more to do with an influx of foreign savers than Fed policy. (Aside: interest rates are a bad indicator of monetary policy. High interest rates during the German hyperinflation are a great example.)
As far as the late ’90s go, yes, lots of bad investments were made. I think this was caused not by bad monetary economics but by irrational investor beliefs. I imagine people would still have invested in Pets.com regardless of Fed action or inaction. Monetary policy might explain excessive valuations everywhere, but it doesn’t explain excessive, localized valuations. Additionally, interest rates are mostly irrelevant to the tech sector, where financing is usually based on equity rather than debt.
If the ABCT story is true, we’d expect to see a bust in long-term schemes during recessions and a boom in short-term schemes. Instead, we see a bust in both.
ABCT seems married to the idea that expansionary monetary policy is “unsustainable” and interest rates must return to “natural” levels. This is nonsense. The Fed has been performing QE for years, and it’s been tremendously helpful by most accounts. Fed “inaction” is still action.
There’s not much empirical support for the theory.
Edit: Broken link.
The boom produces a lot of stuff which is theoretically not the optimum stuff to produce using the resources used in the boom. However, to the extent the boom brings resources out of the woodwork that may not have been used to produce anything at all in the absence of the boom, it may not actually be a net loss compared to a realistic counterfactual.
The bust, accompanied by significant unemployment, is almost certainly producing less than any of the counterfactuals in which more people are employed. Of course it IS possible to employ some people digging holes and others to fill them in, but I think this is a strawman; generally, artificially increased employment produces something of value.
The Austrians may have it wrong because the obviousness of the bust being the unproductive distortion is lost to them in the intellectual excitement of realizing you can’t have a bust without a boom, and so they mistakenly think it is the boom which is less productive.
Sometimes the obvious answer IS right. I think the fact that particularly intelligent people acting in groups miss this more often than is optimum should be one of the cognitive biases on our list of biases we study and stay aware of.
Unemployed people produce less than employed people. The odd construction of a corner case does not make this generally true statement generally false.
Hindsight bias. It’s only after the bust that you find out which boom things made sense after all and which didn’t.
“In theory, there is no difference between theory and practice. In practice, there is.”
Dupe.
“[W]hen you have eliminated the impossible, whatever remains, however improbable, must be the truth.”—Sherlock Holmes
Technically true. Some notable ‘improbable’ things that remain are the chance that you screwed up your thinking or measuring somewhere, or that you are hallucinating. (I agree denotatively but am wary about the connotations.)
Agree with sibling qualifications, though note that I find this extremely useful as a Finding Lost Stuff heuristic, and by using it as a motto, have significantly decreased my instantiation of the literal streetlight effect.
“When you have updated on the evidence, whatever is the most probable, however socially unacceptable, must be believed.”
“When you have updated on the evidence, whatever is the most probable, must be believed, even if it is uncontroversial, mundane, and doesn’t make startling conversation at parties.”
Duplicate (although correctly attributed this time).
I remember a response to this which goes something like - when you have eliminated the impossible, what remains may be more improbable than having made a mistake in one of your earlier impossibility proofs.
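That response can be made quantitative: once your “impossibility proofs” have some chance of being wrong, “I made a mistake” can dominate whatever improbable option remains. A toy Bayesian calculation, with made-up numbers, assuming both surviving explanations account for the evidence equally well:

```python
# Toy numbers, invented for illustration.
p_exotic = 1e-4          # prior on the improbable hypothesis that "remains"
p_proof_flawed = 0.02    # chance an "impossibility proof" of yours has a bug

# If every other hypothesis has been "eliminated", the evidence is equally
# well explained by the exotic hypothesis being true or by a flawed proof
# having mistakenly eliminated the correct mundane hypothesis.
posterior_exotic = p_exotic / (p_exotic + p_proof_flawed)
posterior_mistake = p_proof_flawed / (p_exotic + p_proof_flawed)

print(round(posterior_exotic, 4))   # ~0.005: "must be the truth" is ~0.5%
print(round(posterior_mistake, 4))  # ~0.995: far more likely you blundered
```

With these priors, "however improbable, it must be the truth" only goes through when the remaining hypothesis is less improbable than an error in your own reasoning.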
http://lesswrong.com/lw/3m/rationalist_fiction/2p6
Hazrat Inayat Khan
ibid.
But Naaman was wroth, and went away...And his servants came near, and spake unto him, and said, My father, if the prophet had bid thee do some great thing, wouldest thou not have done it? how much rather then, when he saith to thee, Wash, and be clean?
2 Kings 5: 11-13
Micah 6: 7-8
--multiple sutras
-- Iain M. Banks
I suppose I somewhat appreciate the sentiment. I note that labelling the killing ‘murder’ has already amounted to significant discretion. Killings that are approved of get to be labelled something nicer sounding.
Does this pay rent in policy changes? It seems probable that existing policy positions will already determine the contexts in which we might choose to apply this quote, so that the quote will only be generating the appearance of additional evidential weight, but will in fact result in double-counting if we use its applicability as evidence for or against a proposal, because we already chose to use the quote because we disagreed with the proposal. For example: ‘This imperialist intervention is wrong—Fuck every cause that ends in murder and children crying.’ Is the latter clause doing any work?
(First version of this comment:
Does this pay rent in suggested policies? It feels like under all plausible interpretations, it’s at best ‘I’m so righteous!’ and possibly other things.)
Yes. It rules out all sorts of policies, including good ones. It likely rules out murdering Hitler to prevent a war, especially if that requires killing guards in order to get to him.
Upvoted; wording was bad. Edited.
I agree entirely with your new wording. This quote seems to be the sort of claim to bring out conditionally against causes we oppose but conveniently ignore when we support the cause.
As much as I love Banks, this sounds like a massive set of applause lights, complete with sparkling Catherine wheels. Sometimes, you have to do shitty things to improve the world, and sometimes the shitty things are really shitty, because we’re not smart enough to find a better option fast enough to avoid the awful things resulting from not improving at all. “The perfect must not be the enemy of the good” and so on.
And sometimes you do shitty things because you think they will improve the world, but hey, even though the road to hell is very well-paved already, there’s always a place for another cobblestone...
The heuristic of this quote is that it is a firewall against a runaway utility function. If you convince yourself that something will generate gazillions of utilons, you’d be willing to pay a very high price to reach this, even though your estimates might be in error. This heuristic puts a cap on the price.
It’s good as an exhortation to build a Schelling fence, but without that sentiment, it’s pretty hollow. Reading the context, though, I agree with you: it’s a reminder that feeling really sure about something and being willing to sacrifice a lot of you and other (possibly unwilling) people to create a putative utopia probably means you’re wrong.
“Sorrow be damned, and all your plans. Fuck the faithful, fuck the committed, the dedicated, the true believers; fuck all the sure and certain people prepared to maim and kill whoever got in their way; fuck every cause that ended in murder and a child screaming. She turned and ran...”
(As an aside, I now have the perfect line for if I ever become an evil mastermind and someone quotes that at me: “But you see, murder and children screaming is only the beginning!”)
The problem is that there are better heuristics out there. Look up “just war theory” for starters.
This seems better-suited for MoreEmotional than LessWrong.
I think this is a useful heuristic because humans are just not good at calculating this stuff. Ethical Injunctions suggests that you do in fact check with your emotions when the numbers say something novel. (This is why I’m sceptical about deciding on numbers pulled out of your arse rather than pulling the decision directly out of your arse.)
I don’t think Banks even believed that, though. Several of his books certainly seem to be evidence to the contrary.
Is that both, or either/or? If either/or, it may include such atrocities as going to bed on time and eating vegetables. If both, it seems to imply that killing those not as beloved by children may be acceptable.
I wonder if people here realize how anti-utilitarian this quote is :-)
“Murder and children crying” aren’t allowed to have negative weight in a utility function?
It’s not about weight, it’s about an absolute, discontinuous, hard limit—regardless of how many utilons you can pile up on the other end of the scale.
Well, no. It’s against the promise of how many utilons you can pile up on the other arm of the scale, which may well not pay off at all. I’m reminded of a post here at some point whose gist was “if your model tells you that your chances of being wrong are 3^^^3:1 against, it is more likely that your model is wrong than that you are right.”
Yes, but the quote in no way concerns itself with the probability that such a plan will go wrong; rather, it explicitly includes even those with a wide margin of error, including “every” plan which ends in murder and children crying.
If your plan ends in murder and children crying, what happens if your plan goes wrong?
The murder and children crying fail to occur in the intended quantity?
If your plan requires you to get into a car with your family, what happens if you crash?
Well, getting into a car with your family is not inherently bad, so it’s not a very good parallel… but if your overall point is that “expected value calculations do not retroactively lose mathematical validity because the world turned out a certain way”, then that’s definitely true.
I think that the “what if it all goes wrong” sort of comment is meant to trigger the response of “oh god… it was all for nothing! Nothing!!!”. Which is silly, of course. We murdered all those people and made those children cry for the expected value of the plan. Complaining that the expected value of an action is not equal to the actual value of the outcome is a pretty elementary mistake.
The features of my plan which mitigate the result of the plan going wrong kick in, and the damage is mitigated. I don’t go on vacation, despite the nonrefundable expenses incurred. The plan didn’t end in death and sadness, even if a particular implementation did.
When the plan ends in murder and children crying, every failure of the plan results in a worse outcome.
This does not seem to follow. Failure of the plan could easily involve failure to cause the murder or crying to happen for a start. Then there is the consideration that an unspecified failure has completely undefined behaviour. Anything could happen, from extinction or species-wide endless torture to the outright creation of a utopia.
For most people, murder and children crying are a bad outcome for a plan, but if they’re what the planner has selected as the intended outcome, the other probable outcomes are presumably worse. Theoretically, the plan could “fail” and end in an outcome with more utilons than murder and children crying, but those failures are obviously improbable: because if they weren’t, then the planner would presumably have selected them as the desired plan outcome.
Or at least have the foresight to see that they have become likely and alter the plan such that it now results in utopia instead of murder.
I think we need to examine what we mean by ‘fail’.
A plan does not fail simply because the actual outcome is different from the outcome judged most likely; a plan fails when a contingency not prepared for occurs which prevents the intended outcome from being realized, or when an explicit failure state of the plan is reached.
If I plan to go on a vacation and prepare for a major illness by deciding that I will cancel the vacation, then experiencing a major illness might cause the plan to fail, because I have identified that as a failure state. The more important the object of the plan, the harder I will work in the planning stage to minimize the likelihood of ending up in a failure state. (When sending a probe to Mars, for example, I want to be prepared such that everything I can think of that might go wrong along the way still yields a success condition.)
It’s not a matter of “the plan might go wrong”, it’s a matter of “the plan might be wrong”, and the universal part comes from “no, really, yours too, because you aren’t remotely special.”
Seems like one of those rules that apply to humans but not to a perfect rationalist, then.
Sounds about right to me.
You seem to be implying that people here should care about a quote being anti-utilitarian. They shouldn’t. Utilitarianism refers to a group of largely abhorrent and arbitrary value systems.
It is also contrary to virtually all consequentialist value systems of the kind actually held by people here or extrapolatable from humans. All consequentialist systems that match the quote’s criteria for not being ‘Fucked’ are abhorrent.
It is not. “Murder and children crying” here are not means to an end, they are consequences as well. Maybe not intended consequences, maybe side effects (“collateral damage”), but still consequences.
I see no self-contradiction in a consequentialist approach which just declares certain consequences (e.g. “murder and children crying”) to be unacceptable.
There is nothing about consequentialism which distinguishes means from ends. Anything that happens is an “end” of the series of actions which produced it, even if it is not a terminal step, even if it is not intended.
When wedrifid says that the quote is “anti-consequentialism”, they are saying that it refuses to weigh all of the consequences—including the good ones. The negativity of children made to cry does not obliterate the positivity of children prevented from crying, but rather must be weighed against it, to produce a sum which can be negative or positive.
To declare a consequence “unacceptable” is to say that you refuse to be consequentialist where that particular outcome is involved; you are saying that such a consequence crashes your computation of value, as if it were infinitely negative and demanded some other method of valuation, which did not use such finicky things as numbers.
But even if there is a value which is negative, and 3^^^3 times greater in magnitude than any other value, positive or negative, its negation will always be of equal and opposite value, allowing things to be weighed against each other once again. In this example, a murder might be worth −3^^^3 utilons—but preventing two murders by committing one results in a net sum of +3^^^3 utilons.
The only possible world in which one could reject every possible cause which ends in murder or children crying is one in which it is conveniently impossible for such a cause to lead to positive consequences which outweigh the negative ones. And frankly, the world we live in is not so convenient as to divide itself perfectly into positive and negative acts in such a way.
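The weighing argument above can be put in toy numeric form. This is a sketch with entirely invented weights, using a large finite number as a stand-in for the quote’s “−3^^^3 utilons”:

```python
import math

# Toy illustration of the weighing argument: with finite (if astronomically
# large) weights, outcomes still compare and sum; with an infinite weight,
# the arithmetic of value degenerates. All weights are invented.
MURDER = -1e18                            # stand-in for "-3^^^3 utilons"

net = MURDER + 2 * (-MURDER)              # commit one murder, prevent two
print(net > 0)                            # True: finite weights still sum

degenerate = float("-inf") + 2 * float("inf")
print(math.isnan(degenerate))             # True: inf - inf is undefined
```

The `nan` in the second case is the arithmetic version of the “crashed computation of value”: once a consequence is assigned infinite disvalue, its negation can no longer be weighed against it.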
Wikipedia: Consequentialism is the class of normative ethical theories holding that the consequences of one’s conduct are the ultimate basis for any judgment about the rightness of that conduct. … Consequentialism is usually distinguished from deontological ethics (or deontology), in that deontology derives the rightness or wrongness of one’s conduct from the character of the behaviour itself rather than the outcomes of the conduct.
The “character of the behaviour” is means.
Consequentialism does not demand “computation of value”. It only says that what matters is outcomes; it does not require that the outcomes be comparable or summable. I don’t see that saying that certain outcomes are unacceptable, full stop (= have negative infinity value) contradicts consequentialism.
You have a point, there are means and ends. I was using the term “means” as synonymous with “methods used to achieve instrumental ends”, which I realize was vague and misleading. I suppose it would be better to say that consequentialism does not concern itself with means at all, and rather considers every outcome, including those which are the result of means, to be an end.
As for your other point, I’m afraid that I find it rather odd. Consequentialism does not need to be implemented as having implicitly summable values, much as rational assessment does not require the computation of exact probabilities, but any moral system must be able to implement comparisons of some kind. Even the simplest deontologies must be able to distinguish “good” from “bad” moral actions, even if all “good” actions are equal, and all “bad” actions likewise.
Without the ability to compare outcomes, there is no way to compare the goodness of choices and select a good plan of action, regardless of how one defines “good”. And if a given outcome has infinitely negative value, then its negation must have infinitely positive value—which means that the negation is just as desirable as the original outcome is undesirable.
Pardon me. I left off the technical qualifier for the sake of terseness. I have previously observed that all deontologial value systems can be emulated by (suitably contrived) consequentialist value systems and vice-versa so I certainly don’t intend to imply that it is impossible to construct a consequentialist morality implementing this particular injunction. Edited to fix.
Your point is perfectly valid, I think. Every action-guiding set of principles is ultimately all about consequences. Deontologies can be “consequentialized”, i.e. expressed only through a maximization (or minimization) rule of some goal-function, by a mere semantic transformation. The reason why this is rarely done is, I suspect, because people get confused by words, and perhaps also because consequentializing some deontologies makes it more obvious that the goals are arbitrary or silly.
The traditional distinction between consequentialism and non-consequentialism does not come down to the former only counting consequences—both do! The difference is rather about what sort of consequences count. Deontology also counts how consequences are brought about, that becomes part of the “consequences” that matter, part of whatever you’re trying to minimize. “Me murdering someone” gets a different weight than “someone else murdering someone”, which in turn gets a different weight from “letting someone else die through ‘natural causes’ when it could be easily prevented”.
And sometimes it gets even weirder, the doctrine of double effect for instance draws a morally significant line between a harmful consequence being necessary for the execution of your (well-intended) aim, or a “mere” foreseen—but still necessary(!) -- side-effect of it. So sometimes certain intentions, when acted upon, are flagged with negative value as well.
And as you note below, deontologies sometimes attribute infinite negative value to certain consequences.
That’s kind-of a good point, but I seriously doubt that that quote would be that effective in making people get it who don’t already.
I, too, support the cause of opposing every such cause.
This seems like a poor strategy by simply considering temper tantrums, let alone all of the other holes in this. (The first half of the comment though, I can at least appreciate.)
David Chapman thinks that using LW-style Bayesianism as a theory of epistemology (as opposed to just probability) lumps together too many types of uncertainty; to wit:
I think he is correct, and LWers are overselling Bayesianism as a solution to too many problems (at the very least, without having shown it to be one).
I believe you are posting this in the wrong thread.
I do not see why any of Chapman’s examples cannot be given appropriate distributions and modeled in a Bayesian analysis just like anything else:
Dynamical chaos? Very statistically modelable, in fact, you can’t really deal with it at all without statistics, in areas like weather forecasting.
Inaccessibility? Very modelable; just a case of missing data & imputation. (I’m told that handling issues like censoring, truncation, rounding, or intervaling are considered one of the strengths of fully Bayesian methods and a good reason for using stuff like JAGS; in contrast, whenever I’ve tried to deal with one of those issues using regular maximum-likelihood approaches it has been… painful.)
Time-varying? Well, there’s only a huge section of statistics devoted to the topic of time-series and forecasts...
Sensing/measurement error? Trivial; in fact, one of the best cases for statistical adjustment (see psychometrics), and arguably dealing with measurement error is the origin of modern statistics (the first instances of least squares coming from Gauss and other astronomers dealing with errors in astronomical measurement, and of course Laplace applied Bayesian methods to astronomy as well).
Model/abstraction error? See everything under the heading of ‘model checking’ and things like model-averaging; local favorite Bayesian statistician Andrew Gelman is very active in this area, no doubt he would be quite surprised to learn that he is misapplying Bayesian methods in that area.
One’s own cognitive/computational limitations? Not just beautifully handled by Bayesian methods + decision theory, but the former is actually offering insight into the former, for example “Burn-in, bias, and the rationality of anchoring”.
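As an illustration of the measurement-error point in the list above, here is a minimal grid-approximation sketch: a fixed quantity observed through noisy sensors, with a posterior recovered over its true value. All numbers are invented for illustration.

```python
import numpy as np

# Bayesian handling of measurement error via grid approximation:
# recover a posterior over a fixed quantity from noisy readings.
rng = np.random.default_rng(0)
true_value, noise_sd = 3.0, 0.5
data = true_value + noise_sd * rng.standard_normal(20)   # noisy readings

grid = np.linspace(0.0, 6.0, 601)        # candidate true values
log_prior = np.zeros_like(grid)          # flat prior over the grid
# Gaussian log-likelihood of all readings at each candidate value
log_lik = (-0.5 * ((data[:, None] - grid[None, :]) / noise_sd) ** 2).sum(axis=0)
log_post = log_prior + log_lik
posterior = np.exp(log_post - log_post.max())
posterior /= posterior.sum()

estimate = (grid * posterior).sum()      # posterior mean, near the true value
print(round(estimate, 2))
```

The same machinery extends to the other items in the list (missing data becomes extra unknowns to integrate over; time series become priors over trajectories), at the cost of more computation.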
gwern, I am curious. You do a lot of practical data analysis. How often do you use non-Bayesian methods?
Pretty frequently (if you’ll pardon the pun). Almost all papers are written using non-Bayesian methods, people expect results in non-Bayesian terms, etc.
Besides that: I decided years ago (~2009) that as appealing as Bayesian approaches were to me, I should study ‘normal’ statistics & data analysis first—so I understood them and why I didn’t want to use them before I began studying Bayesian statistics. I didn’t want to wind up in a situation where I was some sort of Bayesian fanatic who could tell you how to do a Bayesian analysis but couldn’t explain what was wrong with the regular approach or why Bayesian approaches were better!
(I think I’m going to be switching gears relatively soon, though: I’m working with a track coach on modeling triple-jumping performance, and the smallness of the data suggests it’ll be a natural fit for a multilevel model using informative priors, which I’ll want to read Gelman’s textbook on, and that should be a good jumping off point.)
Random question—if you were to recommend a textbook or two, from frequentist and Bayesian analysis both, to a random interested undergraduate...
(As you might guess, not a hypothetical, unfortunately.)
Agreed about chaos, missing data, time series, and noise, but I think the next is off the mark:
He might be surprised to be described as applying Bayesian methods at all in that area. Model checking, in his view, is an essential part of “Bayesian data analysis”, but it is not itself carried out by Bayesian methods. The strictly Bayesian part—that is, the application of Bayes’ theorem—ends with the computation of the posterior distribution of the model parameters given the priors and the data. Model-checking must (he says) be undertaken by other means because the truth may not be in the support of the prior, a situation in which the strict Bayesian is lost. From “Philosophy and the practice of Bayesian statistics”, by Gelman and Shalizi (my emphasis):
...
If anyone’s itching to say “what about universal priors?”, Gelman and Shalizi say that in practice there is no such thing. The idealised picture of Bayesian practice, in which the prior density is non-zero everywhere, and successive models come into favour or pass out of favour by nothing more than updating from data by Bayes theorem, is, they say, unworkable.
They liken the process to Kuhnian paradigm-shifting:
but find Popperian hypothetico-deductivism a closer fit:
For Gelman and Shalizi, model checking is an essential part of Bayesian practice, not because it is a Bayesian process but because it is a necessarily non-Bayesian supplement to the strictly Bayesian part: Bayesian data analysis cannot proceed by Bayes alone. Bayes proposes; model-checking disposes.
I’m not a statistician and do not wish to take a view on this. But I believe I have accurately stated their view. The paper contains some references to other statisticians who, they say, are more in favour of universal Bayesianism, but I have not read them.
Loath as I am to disagree with Gelman & Shalizi, I’m not convinced that the sort of model-checking they advocate, such as posterior p-values, is fundamentally and in principle non-Bayesian, rather than a practical problem. I mostly agree with “Posterior predictive checks can and should be Bayesian: Comment on Gelman and Shalizi, ‘Philosophy and the practice of Bayesian statistics’”, Kruschke 2013: I don’t see why that sort of procedure cannot be subsumed with more flexible and general models in an ensemble approach, with poor fits of particular parametric models found automatically and the posterior shifted toward more complex but better-fitting models. If we fit one model and find that it is a bad model, the root problem was that we were only looking at one model when we knew that there were many other models, but out of laziness or limited computation we discarded them all. You might say that when we do an informal posterior predictive check, what we are doing is a Bayesian model comparison of one or two explicit models with the models generated by a large multi-layer network of sigmoids (specifically, <80 billion of them)… If you’re running into problems because your model-space is too narrow—expand it! Models should be able to grow (this is a common feature of Bayesian nonparametrics).
This may be hard in practice, but then it’s just another example of how we must compromise our ideals because of our limits, not a fundamental limitation on a theory or paradigm.
Expanding further on my previous reply, I believe that the claimed (by Gelman and Shalizi) non-Bayesian nature of model-checking is wrong: the truth is that everything that goes under the name of model-checking works, to the extent that it does, so far as it approximates the underlying Bayesian structure. It is not called Bayesian, because it is not an actual, numerical use of Bayes theorem, and the reason we are not doing that is because we do not know how: in practice we cannot work with universal priors.
So Bayesian ideas are applicable to the problem of model/abstraction error, but we cannot apply them numerically. In fact, that is pretty much what model/abstraction error means—if we did have numbers, they would be part of the model. Model checking is what we do when we cannot calculate any further with numerical probabilities.
Cf. my analogy here with understanding thermodynamics.
I believe that would be Eliezer’s response to Gelman and Shalizi. I would not expect them to be convinced though. Shalizi would probably dismiss the idea as moonshine and absurdity.
ETA: Eliezer on the subject:
ETA: Why is the grandparent at −4? David Chapman and simplicio may be wrong about this, but neither are saying anything stupid, or so much thrashed out in the past as to not merit further words.
Judging by the abstract I assume you meant to write, the latter is offering insight into the former?
Unless there’s been an enormous breakthrough in the past 2 years, I believe this is still a major unsolved problem. Also decision theory is about cooperating with other agents, not overcoming cognitive limitations.
Note that I was speaking of “Bayesianism” as practiced on LW, not of Bayesian statistics the academic field. I do not believe these are the same.
I believe Chapman is writing a more detailed critique of what he sees here; I will be sure to link you to it when it comes.
I think that’s absurd if that’s what he really means. Just because we are not daily posting new research papers employing model-averaging or non-parametric Bayesian statistics does not mean that we do not think those techniques are useful and incorporated in our epistemology or that we would consider the standard answers correct, and this argument can be applied to any area of knowledge that LWers might draw upon or consider correct. If we criticize p-values as a form of building knowledge, is that not a part of ‘Bayesian epistemology’ because we are drawing arguments from Jaynes or Ioannidis and did not invent them ab initio?
‘Your physics can’t deal with modeling subatomic interactions, and so sadly your entire epistemology is erroneous.’ ‘??? There’s a huge and extremely successful area of physics devoted to that, and I have no freaking idea what you are talking about. Are you really as ignorant and superficial as you sound like, in listing as a weakness something which is actually a major strength of the physics viewpoint?’ ‘Oh, but I meant physics as practiced on LessWrong! Clearly that other physics is simply not relevant. Come back when LW has built its own LHC and replicated all the standard results in the field, and then I’ll admit that particle physics as practiced on LW is the same thing as particle physics the academic field, because otherwise I refuse to believe they can be the same.’
I think you’re not being charitable again. Consider the difference between physics as practiced by quantum woo mystics, and physics as practiced by physicists or even engineers. I think that simplicio is referring to a similar (though less striking) tendency for the representative LWer to quasi-religiously misapply and oversell probability theory (which may or may not be the case, but should be argued with something other than uncharitable ridicule).
I think you may be extrapolating much too far from the quote I posted. Also, my statistics level is well below both yours and Chapman’s so I am not a good interlocutor for you.
I don’t think I am. It’s a very simple quote: “here is a list of n items Bayesian statistics and hence epistemology cannot handle; therefore, it cannot be right.” And it’s dead wrong because all n items are handled just fine.
I think you are being uncharitable. The list was of different types of uncertainty that Bayesians treat as the same, with a side of skepticism that they should be handled the same, not things you can’t model with bayesian epistemology.
The question is not whether Bayes can handle those different types of uncertainty, it’s whether they should be handled by a unified probability theory.
I think the position that we shouldn’t (or don’t yet) have a unified uncertainty model is wrong, but I don’t think it’s so stupid as to be worth getting heated about and being uncivil.
Did somebody solve the problem of logical uncertainty while I wasn’t looking?
I disagree that Gwern is being uncivil. I don’t think Chapman has any ground to criticize LW-style epistemology when he’s made it abundantly clear he has no idea what it is supposed to be. (Indeed, that’s his principal criticism: the people he’s talked to about it tell him different things.)
It’d be like if Berkeley asked a bunch of Weierstrass’ first students about their “supposed” fix for infinitesimals. Because the students hadn’t completely grasped it yet, they gave Berkeley a rope, a rubber hose, and a burlap sack instead of giving him the elephant. Then Berkeley goes and writes a sequel to the Analyst disparaging this “new Calculus” for being incoherent.
In that world, I think Berkeley’s the one being uncivil.
Covril, The Wheel of Time
Is that true (for trees or people)?
Edit: For one example, this person currently linked in the sidebar isn’t sure.
If this quote were about people improving through adversity I wouldn’t have posted it (I also read that article). But I think it’s true for arguments. The last sentence does a better job of fitting the character than illuminating the point so I could have left it out.
Do arguments themselves “improve”, rather than simply being right or wrong?
Maybe, since arguments have component parts that can be individually right or wrong; or maybe not, since chains of reasoning rely on every single link; or maybe, since my argument improves (along with my beliefs) as I toss out and replace the old one.
Come to think of it, if “trees grow roots most strongly when wind blows through them” because the trees with weak roots can’t survive in those conditions then this would make a very bad metaphor for people.
No, it’s probably accurate as stated. I don’t know about trees as such, but if you try to start vegetable seedlings indoors and then transfer them outside, they’ll often die in the first major wind; the solution is to get the air around them moving while they’re still indoors (as with a fan), which causes them to devote resources to growing stronger root systems and stems.
Ernest Rutherford
That sounds like a ridiculous thing to say and I can’t really steelman it.
Do you have a reliable source for this quote? The Wikipedia talk page for the Rutherford article contains this exchange:
The quote itself, while still on the page, references this site which is an unsourced quote collection.
OK, maybe the quote isn’t legit, but after all quite a lot of our favorite quotes are misquotations—that’s not the point. It’s an interesting thought even if no Nobel laureate ever said it. Is it ridiculous? It makes a lot of sense to me.
It’s ridiculous if taken literally as a universal prior or bound, because it’s very easy to contrive situations in which refusing to give probabilities below 1/10^12 lets you be dutch-booked or otherwise screw up. For example, log2(10^12) is about 40, so if I flip a fair coin 50 times, say, and ask you to bet on every possible sequence… (Or simply consider how many operations your CPU does every minute, and consider being asked “what are the odds your CPU will screw up an operation this minute?” You would be in the strange situation of believing that your computer is doomed even as it continues to run fine.) But it’s much more reasonable if you consider it as applying only to high-level theories or conclusions of long arguments which have not been highly mechanized; I discuss this in http://www.gwern.net/The%20Existential%20Risk%20of%20Mathematical%20Error and see particularly the link to “Probing the Improbable”.
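The coin-flip arithmetic is quickly checked: any specific sequence of 50 fair-coin flips has probability 2^-50, already well below a 1/10^12 floor.

```python
import math

# A specific 50-flip sequence is far less probable than 1/10^12,
# so a blanket floor on probabilities is easy to violate.
p_sequence = 0.5 ** 50
floor = 1e-12
print(p_sequence < floor)    # True: 2^-50 is about 8.9e-16
print(math.log2(10 ** 12))   # about 39.86, i.e. roughly 40 bits
```

Anyone confined to the floor would have to assign every one of the 2^50 sequences a probability at least a thousand times too high, which is what makes the dutch book possible.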
Yes, that’s how I read it. Obviously it doesn’t literally mean you can’t be very sure about anything; the message is that science is wrong very often and you shouldn’t bet too much on the latest theory. So even if it’s a complete misquote, it’s a nice thought.
In addition to gwern’s reply, if you read it as 10-to-1 to 12-to-1 odds, or even 1012-to-1 odds, and not 10^12-to-1 odds, then obviously there are lots of physical theories that deal with events that are less likely than 1/1012. And lots of experiments whose outcome people are more than 1012-to-1 sure about, and they are right to be so sure.
You quoted the most ridiculous figure, that of 10-to-1 or 12-to-1. I’m quite legitimately more than 12-to-1 sure about some things in physics, and I’m not even a physicist! The Wikipedia talk quote makes the point that all three possible quotes are to be found on the internet.
Brandon Smith
-- Peter F. Drucker, A Functioning Society
No, not all sound is communication. No, you aren’t communicating just by listening and understanding. To communicate is to send a message and have it received.
What’s the context of this paragraph?
This seems to contradict one of the main Sequences here, namely A Human’s Guide To Words. Specifically Taboo Your Words (which even uses the tree in the forest example). This is probably why it was downvoted.
Sean
Ok, I purchased my mansion and a sportscar. What’s step 2?
You might be interested in borrowing a copy of The Millionaire Next Door from a library. It’s a bit more accurate about rich people than television.
Does this strike you as cargo cult language?
That’s not cargo cult language, it’s just ordinary cargo cult behavior.
Is there any reason to believe this is true? I would guess Judith Harris would say no, and she’s spent a lot more time thinking about this than I have.
Well, it’s clear that propensity to acquire wealth isn’t purely genetic.
First off, it seems like wealth is a zero-sum game. If I make money then someone somewhere else is losing money. They may be getting something of equivalent value in exchange, or they might just be getting screwed—it really doesn’t matter.
You could probably argue that most people want to be richer, right? I mean, if you asked a random segment of the population, you’d probably get a conservative 80% of them to say “yes, I’d like to have more money.”
If all these people followed the rule in this quote, then this would be the problem: what’s the outcome when two or more people who have identical goals (and methods of obtaining those goals) play each other in a zero-sum game?
I’m not sure that it would end well for the majority of people.
Wealth is very clearly NOT a zero-sum game.
Wealth isn’t about money, it’s about value. Ask yourself: if you create value, is someone somewhere else destroying value?
Money (in this context) is just a unit of account, and any central bank can produce an unlimited amount of it.
Wealth isn’t a zero-sum game unless there is no economic growth; people getting richer in non-zero-sum ways is (as I understand it) what economic growth is.
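A toy sketch of the point that trade is positive-sum even when money is conserved (the agents, goods, and valuation numbers here are all hypothetical, chosen only for illustration):

```python
# Toy illustration: a voluntary swap increases total subjective value
# even though no money changes hands and no goods are created.
# All names and valuations below are hypothetical.

valuations = {
    "alice": {"apple": 1, "bread": 3},  # Alice starts with the apple
    "bob":   {"apple": 3, "bread": 1},  # Bob starts with the bread
}

def total_value(holdings):
    """Sum each agent's subjective valuation of the good they hold."""
    return sum(valuations[agent][good] for agent, good in holdings.items())

before = {"alice": "apple", "bob": "bread"}
after  = {"alice": "bread", "bob": "apple"}  # they trade

print(total_value(before))  # 2: each holds the good they value at 1
print(total_value(after))   # 6: each holds the good they value at 3
```

Nobody "loses" the value the other gains: both agents end up strictly better off by their own lights, which is the sense in which wealth creation is not zero-sum.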
Justin Peters, in a Slate.com article about Aaron Swartz.
People don’t have that right, in general. Except in the technical ‘might makes right’ sense employed by some tyrants.
The implied claim seems to be that it is more morally acceptable to disrespect individuals who kill themselves (beyond the specific criticism of the particular decision). I have nothing but contempt for that claim and so obviously don’t consider it to belong in this thread.
Uh… what does this mean, and how is it a rationality quote?
I mean, when you kill yourself, you become dead, and thus unable to do anything, including, but not limited to, “control your own story”. But that’s a trivial fact.
Is the quote meant to say something less trivial?
Part of the cost of suicide is that you will have less control over how people remember you than if you had lived longer. The word “forfeit” implies that we should feel no obligation to respect the memory of people who kill themselves.
When someone dies we might feel a moral obligation to follow their wishes. Suicide, when the person was not in great pain, should nullify any such feeling.
Well, in that case: that’s dumb.
It says more than that. It implies that there is an obligation to respect the memory of people and that said obligation no longer applies if they kill themselves.
Vox Day
Why not? If I broke it, there’s a chance that I know exactly what I did. The next version of whatever it is I broke should eliminate that failure mode.
-- Unknown
“No”, and “false”, respectively.
If something is mine or otherwise under my influence, then my opinion on how it should be fixed determines my action. If, all things considered, I believe it will achieve my ends to have someone else fix it according to their abilities or expertise, then I’ll go ahead and do that. I’ll also choose whom to listen to according to my own judgement. Someone’s initial success at a practical activity is some evidence about the likely usefulness of their word, but it is far from definitive.
Incidentally, the blog post that represents the context of this quote seems to be ridiculous, gratuitous sexism.