Rationality Quotes September 2011
Here’s the new thread for posting quotes, with the usual rules: Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.) Do not quote yourself. Do not quote comments/posts on LW/OB. No more than 5 quotes per person per monthly thread, please.
Megan McArdle
Related SMBC.
Reminds me of:
“I know that you believe you understand what you think I said, but I’m not sure you realize that what you heard is not what I meant.” —Robert McCloskey
Oh, how very true.
--Scott Aaronson
Interesting! Examples?
The whole link is basically a tissue of suggested examples by Aaronson and commenters.
I like that quote, but the rest of the article seems to be just restating obvious collective action problems. Not sure where he gets the “Whole ideologies have been built around ignoring these” bit.
Everyone doing nothing in a collective action problem is a Nash equilibrium, I believe.
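As a toy illustration (a sketch with invented payoffs, not anyone’s actual model): in a public goods game where each contribution costs 1 and yields a total benefit of 3 split among 10 players, no single player gains by contributing alone, so “everyone does nothing” is a Nash equilibrium even though full contribution would leave everyone better off.

```python
# Toy n-player public goods game (invented payoffs, purely illustrative).
# Each player contributes (1) or does nothing (0); a contribution costs COST
# and produces BENEFIT, which is split equally among all players.
N_PLAYERS = 10
COST = 1.0
BENEFIT = 3.0  # socially worthwhile: 3.0 > 1.0 per contribution

def payoff(player, actions):
    shared = BENEFIT * sum(actions) / N_PLAYERS
    return shared - COST * actions[player]

# But BENEFIT / N_PLAYERS = 0.3 < COST, so no one gains by deviating alone:
all_defect = [0] * N_PLAYERS
for i in range(N_PLAYERS):
    deviate = list(all_defect)
    deviate[i] = 1
    assert payoff(i, deviate) < payoff(i, all_defect)  # unilateral contribution hurts
print("Universal inaction is a Nash equilibrium here.")
```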
Most of the relevant ideologies in question are ideologies that try to avoid this problem in economic contexts.
-- Russian proverb
I’m Russian, and I don’t think I’ve heard this proverb before. What does it sound like in Russian? Just curious.
It’s a rather lousy translation. A closer variant of the proverb appears in Vladimir Dahl’s famous collection of Russian proverbs: «Церковь близко, да ходить склизко, а кабак далеконько, да хожу потихоньку» (“The church is near, but the walk is slippery; the tavern is far, but I walk carefully”).
Can you provide a better translation?
Ahh, yes, thank you! I didn’t even recognize the proverb in English, but I doubt that I myself could translate it any better...
http://masterrussian.net/f13/old-russian-proverb-10675/
I’m not sure. I came across it in translated form without sourcing.
.
There is actually a pre-split thread about this essay on Overcoming Bias, and the notion of “Keep Your Identity Small” has come up repeatedly since then.
And of course “Cached Selves”, and especially this comment on that post.
-- Lewis Carroll, “Alice’s Adventures in Wonderland”
Hard to believe that it hasn’t shown up here before…
Gary Marcus, Kluge
Relevant to deathism and many other things
-Richard Feynman
-Joseph A. Schumpeter, Capitalism, Socialism, and Democracy
In other words, politics is the mind killer.
I think it may be wiser to say “policy is the mind killer”; it emphasizes the cross-institutional cross-scale pervasive nature of political thinking.
— John Derbyshire
Douglas Kenrick
(Retracted because I don’t find the point significant enough to argue.)
-- Turkish proverb
Only if the road goes exactly the wrong way, which is unlikely. But I must admit “No matter how far you’ve gone down the wrong road, turn down whatever road is the best road now” doesn’t sound quite as catchy. ;)
-Steve Jobs, Wired, February 1996
It’s still an open question how well the networks succeed at giving people what they want. We still see, for instance, Hollywood routinely spending $100 million on a science fiction film written and directed by people who know nothing about science or science fiction, over 40 years after the success of Star Trek proved that the key to a successful science fiction show is hiring professional science fiction writers to write the scripts.
I don’t think knowing about science had much to do with the success of Star Trek. You’re probably right about the professional science fiction writers, though. Did they stop using professional sf writers for the third season?
In general, does having professional science fiction writers reliably contribute to the success of movies?
A data point which may not point in any particular direction: I was delighted by Gattaca and The Truman Show—even if I had specific nitpicks with them [1] because they seemed like Golden Age [2] science fiction. When composing this reply, I found that they were both written by Andrew Niccol, and I don’t think a professional science fiction writer could have done better. Gattaca did badly (though it got critical acclaim), The Truman Show did well.
[1] It was actually at least as irresponsible as it was heroic for the main character in Gattaca to sneak into a space project he was medically unfit for.
I don’t think Truman’s fans would have dropped him so easily. And I would rather have seen a movie with Truman’s story compressed into the first 15 minutes, and the main part of the movie being about his learning to live in the larger world.
[2] I think the specific Golden Age quality I was seeing was using stories to explore single clear ideas.
I disagree. As I see it, The Truman Show is, at its core, a Gnostic parable similar to The Matrix, but better executed. It follows the protagonist’s journey of discovery, as he begins to get hints about the true nature of reality; namely, that the world he thought of as “real” is, in fact, a prison of illusion. In the end, he is able to break through the illusion, confront its creator, and reject his offer of a comfortable life inside the illusory world, in favor of the much less comfortable yet fully real world outside.
In this parable, the Truman Show dome stands for our current world (which, according to Gnostics, is a corrupt illusion); Christoff stands for the Demiurge; and the real world outside stands for the true world of perfect forms / pure Gnosis / whatever, which can only be reached by attaining enlightenment (for lack of a better term). Thus, it makes perfect sense that we don’t get to see Truman’s adventures in the real world—they remain hidden from the viewer, just as the true Gnostic world is hidden from us. In order to overcome the illusion, Truman must let go of some of his most cherished beliefs, and with them discard his limitations.
IMO, the interesting thing about The Truman Show is not Truman’s adventures, but his journey of discovery and self-discovery. Sure, we know that his world is a TV set, but he doesn’t (at first, that is). I think the movie does a very good job of presenting the intellectual and emotional challenges involved in that kind of discovery. Truman isn’t some sort of a cliched uber-hero like Neo; instead, he’s just an ordinary guy. Letting go of his biases, and his attachments to people who were close to him (or so he thought) involves a great personal cost for Truman—which, surprisingly, Jim Carrey is actually able to portray quite well.
Sure, it might be fun to watch Truman run around in the real world, blundering into things and having adventures, but IMO it wouldn’t be as interesting or thought-provoking—even accounting for the fact that Gnosticism is, in fact, not very likely to be true.
Your essay fails to account for the deep philosophical metaphors of guns, leather, gratuitous exaggerated action and nerds doing kung fu because of their non-conformist magic.
With apologies to Freud, sometimes a leather-clad femme fatale doing kung fu is just a leather-clad femme fatale doing kung fu :-)
That’s kind of the point. A leather-clad femme fatale doing kung fu probably isn’t a costar in an ‘inferior execution of a Gnostic parable’. She’s probably a costar in an entertaining nerd-targeted action flick.
In general it is a mistake to ascribe motives or purpose (Gnostic parable) to something and judge it according to how well it achieves that purpose (inferior execution) when it could be considered more successful by other plausible purposes.
Another thing the Matrix wouldn’t be a good execution of, if that is what it were, is a vaguely internally coherent counterfactual reality, even at the scene level. FFS Trinity, if you pointed a gun at my head and said ‘Dodge This!’ then I’d be able to dodge it without any Agent powers. Yes, this paragraph is a rather loosely related tangent, but damn. The ‘batteries’ thing gets a bad rap, but I can suspend my disbelief on that if I try. A two-second head start on your ‘surprise attack’, given to people who can already dodge bullets, is inexcusable.
I did not mean to give the impression that I judged The Truman Show or The Matrix solely based on how well they managed to convey the key principles of Gnosticism. I don’t even know if their respective creators intended to convey anything about Gnosticism at all (not that it matters, really).
Still, Gnostic themes (as well Christian ones, obviously) do feature strongly in these movies; more so in The Truman Show than The Matrix. What I find interesting about The Truman Show is not merely the fact that it has some religious theme or other, but the fact that it portrays a person’s intellectual and emotional journey of discovery and self-discovery, and does so (IMO) well. Sure, you could achieve this using some other setting, but the whole Gnostic set up works well because it maximizes Truman’s cognitive dissonance. There’s almost nothing that he can rely on—not his senses, not his friends, and not even his own mind in some cases—and he doesn’t even have any convenient superpowers to fall back on. He isn’t some Chosen One foretold in prophecy, he’s just an ordinary guy. This creates a very real struggle which The Matrix lacks, especially toward the end.
AFAIK, in the original script the AIs were exploiting humans not for energy, but for the computing capacity in their brains. This was changed by the producers because viewers are morons.
This is why I’m so glad the creators realized they had pushed their premise as far as they were capable and quit while they were ahead, never making a sequel.
I’m pretty sure that one of the Wachowski brothers talked about the deliberate Gnostic themes of The Matrix in an interview, but as for The Truman Show I have no idea.
I have many times heard fans say this. Not once have any produced any evidence. Can you do so?
The only evidence I have is that it’s so obviously the way the story should be. That’s good enough for me. It does not matter precisely what fallen demiurge corrupted the parable away from its original perfection.
ETA: Just to clarify, I mean that as far as I’m concerned, brains used as computing substrate is the real story, even if it never crossed the Wachowskis’ minds. Just like some people say there was never a sequel (although personally I didn’t have a problem with it).
And like any urban legend, that is why this explanation spreads so easily.
Isn’t the alternative plot as flawed as the original plot, insofar as, if the brain-based computing substrate is used for something other than running the original software (humans), there is no need to actually simulate a matrix?
Not only that, but I’m pretty sure building an interface that’d let you run arbitrary software on a human brain would be at least as hard and resource-intensive as building an artificial brain. We reach the useful limits of this kind of speculation pretty quickly, though; the films aren’t supposed to be hard sci-fi.
You just need to stipulate that the brain can’t stay healthy enough to do that without running a person.
But I’m not much interested in retconning a parable into hard science.
According to IMDB,
So, I guess the answer is “probably not”. Sorry.
But… but… TVTropes says it!
Damnit, I’ve been saying that too, and now I realize I’m not sure why I believe it. Ah well, updating is good.
Inexcusable? :cracks knuckles:
Try to see it from the perspective of the agent. With how close that gun was to his head, and assuming that Trinity was not in fact completely stupid and had the training and hacker-enhanced reflexes to fire as soon as she saw the merest twitch of movement, there was really no realistic scenario where that agent could survive. A human might try to dodge anyway, and die, but for an agent, two seconds spent taunting him was two seconds of delay. A minuscule difference in outcome, but still—U(let Trinity taunt) > U(try to dodge and die immediately).
Yes, where the meaning of ‘inexcusable’ is not ‘someone can say words attempting to get out of it’ but instead ‘no excuse can be presented that the speaker or, by insinuation, any informed and reasonable person would accept’.
No, there is no realistic scenario. But in the scenario that assumes the particular science-fiction premises that define ‘agent’ in this context, all reasonable outcomes leave Trinity dead if she attempts that showmanship. The speed and reaction time demonstrated by the agents is such that they dodge, easily. Trinity still operates on human hardware.
I remind you that these agents were designed to let the One win, else they should have gone gnome-with-a-wand-of-death on all these people.
Isn’t that disproved by paid-for networks, like HBO? And what about non-US broadcasters like the BBC?
The reason companies like HBO can do a different sort of tv is that they don’t have to worry about ratings—they’re less bound by how many watch each show.
He was the guy who thought that people were too dumb to operate a two-button mouse. It’s not that the networks conspired to dumb us down, and it’s not that people want something exactly this dumb, but it’s that those folks in control at the networks, much like Jobs himself, tend to make systematic errors such as believing themselves to be higher above the masses than is actually the case. Sometimes that helps to counter the invalid belief that people will really want to waste a lot of effort on your creation.
And many of his other simplifications were complete successes and why he died a universally-beloved & beatified billionaire.
Seems like a bit of an exaggeration. Almost universally respected, sure.
Yep. Respected and admired at a distance, certainly. But a lot of people who knew him personally tend to describe him as a manipulative jerk.
Which has little to do with how he & his simplifications were remembered by scores of millions of Americans. Don’t you remember when he died, all the news coverage and blog posts and comments? It made me sick.
Meh, I thought of him as a brilliant but heavy-handed and condescending jerk long before I heard of his health problems. I refused to help my family and friends with iTunes (bad for my blood pressure) and anything Mac. My line was: if it “just works” for you, great, if not, you are SOL. Your iPod does not sync? Sorry, I don’t want to hear about any device that does not allow straight file copying.
Heh. I have been known to engage in “What do you mean you are having problems? That’s impossible, there’s the Apple guarantee It Just Works (tm) (r) ” :-D
Actually, no, I don’t remember because I didn’t read them. I’m particular about the kind of pollution I allow to contaminate my mind :-)
Anyway, we seem to agree. One of the interesting things about Jobs was the distance between his private self and his public mask and public image.
I am too, but I pay attention to media coverage to understand what the general population thinks so I don’t get too trapped in my high-tech high-IQ bubble and wind up saying deeply wrong things like private_messaging’s claim that “Jobs’s one-button mice failed so ordinary people really are smart!”
Yeah, that’s so totally what I claimed. Not. My point is that a lot of people overestimate how much smarter they are than ordinary people, and so they think ordinary people a lot dumber than ordinary people really are.
Also, the networks operate under the assumption that less intelligent people are more influenced by advertising, and therefore, the content is not even geared at the average joe, but at the below-average joe.
Feel free to elaborate how your one-button mouse example and all Jobs’s other successes match what you are claiming here about Jobs being a person who underestimated ordinary people’s intelligence. (If Jobs went broke underestimating ordinary people’s intelligence, then may heaven send me a comparable bankruptcy as soon as possible.)
The original quote itself is a fairly good example—he assumes that the networks produce something which is exactly what people want, whereas the networks should, ideally, produce something which the people most influenced by the advertising want: a different, less intelligent demographic. If he was speaking truth in the quote, he must have underestimated the intelligence of average people.
Secondarily, if you want to instead argue from the success, you need to outline how and why underestimation of intelligence would be inconsistent with the success. Clearly, all around more complicated user interfaces also enjoyed huge success. I even give an explanation in my comment—people also tend to massively over-estimate the willingness of users to waste cognitive effort on their creations.
As for what lessons we can learn from it, it is perhaps that underestimating intelligence is relatively safe for a business, albeit many failed startups began with a failure to properly explore the reasons why an apparent opportunity exists, instead explaining it with the general stupidity of others.
edit: also, you could likewise wish for a comparable bankruptcy to some highly successful but rather overcomplicated operating system.
Why’s that? Why aren’t the networks making the most profit by appealing to as many people as possible, because that increase in revenue outweighs the higher advertising prices made possible by narrowly appealing to the stupidest demographic? And why would the stupidest demographic be the most profitable, as opposed to advertising to the smartest and richest demographics? 1% of a million loaves is a lot better than 100% of one hundred loaves.
So you’re making at least two highly questionable economics arguments here, neither of which I accept.
Apple’s success is, from the original Mac on, frequently attributed to simplification and improving UIs. How is this not consistent with correctly estimating the intelligence of people to be low?
You’re absolutely right about this part. And this pervasive overestimation is one of the reasons that ‘worse is better’ and Engelbart died not a billionaire, and Engelbart’s beloved tiling window managers & chording keyboards are unfamiliar even to uber-geeks like us, and why so many brilliant techies watch other people make fortunes off their work. Because, among their other faults, they vastly overestimate how capable ordinary people and users are of using their products.
If one deliberately attempts to underestimate the intelligence of users, one may make less of a mistake than usual.
Seen any TV ads lately? I’m kind of wondering if you’re intending to win here by making an example.
Since you’re on to the markers of real-world success, how does your income compare to the median for people of your age, race, sex, and parents’ economic status, anyway?
I don’t think making a fortune is that much about not overestimating other people. Here’s the typical profile of a completely failed start-up founder: someone with a high narcissism score—a massive over-estimation of their own intelligence, and a massive under-estimation of other people’s intelligence across the board. Plus when they fail, it typically culminates in the conclusion that everyone else is stupider.
edit: also with regards to techies watching others walk away with their money, there’s things like this Atari story
There are a lot of cases of businesspeople getting more money when the products are not user interfaces at all, but messy internals. Tesla and Edison are another story—Edison blew a great deal of money on thinking that other people were stupid enough to be swayed by the electrocution of an elephant. He still made more money, of course, because he had the relevant money-making talents. And Tesla’s poor business ability (still well above average) can hardly be blamed on people being too stupid to deal with complex things that happen in enclosed boxes.
Yes. Ads vary widely in their target audience, ranging from the utter lowest common denominator to subtle parodies and references, across all sorts of channels. The ads you see on Disney are different from the ads you see on Fox News, which are different from the ads you see on Cartoon Network’s Adult Swim block, which are different from the ones on the Discovery Channel. Exactly the opposite of your crude ‘ads exist only to exploit stupid people’ model.
Below-average, and my own website is routinely criticized by readers for being too abstract, having a bad UI, and making no compromises or helping out readers.
Oh, I’m sorry—was I supposed to not prove the point about geeks like me usually overestimating the intelligence of ordinary people? It appears I commit the same sins. Steve Jobs would not approve of my design choices, and he would be entirely correct.
And what does this have to do with Steve Jobs? Please try to stay on topic. I’m defending a simple point here: Steve Jobs correctly estimated the intelligence of people as low, designed UIs to be as simple, intuitive, and easy to use, and this is a factor in why he died a billionaire. What does his narcissism have to do with this?
As I recall the history, this had nothing to do with UIs or people’s intelligence, but with Edison being in a losing position, having failed to invent or patent the superior alternating current technologies that Tesla did, and desperately trying anything he could to beat AC. Since this had nothing to do with UIs, all it shows is that one PR stunt was insufficient to dig Edison out of his deep hole. Which is not surprising; PR can be a powerful force, but it is far from omnipotent.
Thinking that average people’s intelligence is low != thinking every PR stunt ever, no matter how crackbrained, must instantly succeed and dig someone out of any hole no matter how deep.
I’d be astonished if resistance to advertising increases linearly or better with IQ once you control for viewing time. Marketing’s basically applied cognitive science, and one of the major lessons of the heuristics-and-biases field is that it’s really hard to outsmart our biases.
Why do you think you should control for the viewing time? As a marketer, it makes no difference for you why the higher IQs are less influenced. Furthermore a lot of advertising relies on outright lying.
Because I’d expect high-IQ populations to consume less media than the mean not thanks to anything intrinsic to IQ but because there’s less media out there targeting them, and that’s already factored into producers’ and advertisers’ expectations of audience size.
Similar considerations should come into play on the low end of the distribution: the IQ 80 cohort is roughly the same size as the IQ 120 cohort and has less disposable income, both of which should make it less attractive for marketing. Free time might have an impact, but aside from stereotype I don’t know if the lifestyles of the low-IQ lend themselves to more or less free time than those of the high-IQ; I can think of arguments for both.
Exposure to marketing tactics might also build resistance to them, and I’d expect that to be proportional in part to media exposure.
“No one in this world, so far as I know - and I have searched the record for years, and employed agents to help me - has ever lost money by underestimating the intelligence of the great masses of the plain people.” — H. L. Mencken
I think this is happening with Hollywood, but that would be a longer story.
I’d be interested in hearing the longer story… It seems to me that Hollywood is doing very well with a low estimate of average intelligence.
I think there were a great many apes that underestimated the intelligence of a tiger or a bear, and haven’t contributed to our gene pool. There are also all those wars where underestimating the intelligence of the enemy masses cost someone a great deal of money and, at times, their own life.
My parents are incapable of using the context menu in any way.
Jobs may have been on to something.
Forcing everyone to the lowest common denominator hardly counts as “onto something”.
Fictional polemical evidence is not an argument; see my reply to private_messaging.
Did he say this, or are you inferring it from his having designed a one-button mouse?
Having two incorrect beliefs that counter each other (thinking that people want to spend time on your creation but are less intelligent than they actually are) could result in good designs, but so could making neither mistake. I’d expect any decent UI designer to understand that the user shouldn’t need to pay attention to the design, and/or that users will sometimes be tired, impatient or distracted even if they’re not stupid.
I recall reading that he tried a 3-button mouse, didn’t like it, said it was too complicated, and went for a one-button one. Further down the road they needed the difficult-to-teach alternate-click functionality and implemented it with option-click rather than an extra button. Apple stuck with the one-button mouse until 2005 or so, when it jumped to 4 programmable buttons and a scrollball.
The inventor of the mouse and of many aspects of the user interface, Douglas Engelbart, went for 3 buttons and is reported on Wikipedia as stating he’d have put 5 if he had enough space for the switches.
I can’t find a citation, but the rationale I’ve heard is to make it easier to learn how to use a Macintosh (or a Lisa) by watching someone else use one.
I did dial-up tech support in 1999-2000. Lots of general consumers who’d just got on this “internet” thing and had no idea what they were doing. It was SO HARD to explain right-clicking to them. Steve Jobs was right: more than one mouse button confuses people.
What happened, however, is that Mosaic and Netscape were written for X11 and then for Windows. So the Web pretty much required a second mouse button. Eventually Apple gave up and went with it.
(The important thing about computers is that they are still stupid, too hard to use and don’t work. I speak as a professional here.)
And for this we can be eternally grateful. While one button may be simple, two buttons is a whole heap more efficient. Or five buttons and some wheels.
I don’t object to Steve Jobs (or rather those like him) making feature sparse products targeted to a lowest common denominator audience. I’m just glad there are alternatives to go with that are less rigidly condescending.
But did you deal with explaining option-clicking? The problem is that you only get to see the customers who didn’t get “press the right button on the mouse rather than the left”. It’s sort of like dealing with customer responses: you have, say, a 1% failure rate, but from the feedback it looks like you have a 50%..90% failure rate.
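To put rough numbers on that distortion (all figures invented for illustration): if 1% of users actually fail, failing users are a hundred times likelier to call than happy ones, and you only ever hear from callers, the support desk sees something like a 50% failure rate.

```python
# Selection-bias sketch with invented numbers: support only hears from callers,
# so the failure rate it observes dwarfs the true rate.
true_failure_rate = 0.01   # 1% of users can't manage the right/option click
p_call_if_failed = 0.50    # half of failing users phone support
p_call_if_fine = 0.005     # almost no happy users call about the mouse

calls_failed = true_failure_rate * p_call_if_failed
calls_fine = (1 - true_failure_rate) * p_call_if_fine
observed = calls_failed / (calls_failed + calls_fine)
print(f"Failure rate as seen from the support desk: {observed:.0%}")  # ~50%
```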
Then, of course, Apple also came up with these miracles of design such as double click (launch) vs slow double click (rename). And while the right-click is a matter of explanation—put your hand there so and so, press with your middle finger—the double clicking behaviour is a matter of learning a fine motor skill, i.e. older people have a lot of trouble.
edit: what percentage of people do you think could not get right clicking? And did you have to deal with one-button users who must option-click?
This was 1999, Mac OS9 as it was didn’t really have option-clicking then.
I wouldn’t estimate a percentage, but basically we had 10% Mac users and 2% of our calls came from said Mac users.
It is possible that in 2013 people have been beaten into understanding right-clicking … but it strikes me as more likely those people are using phones and iPads instead. The kids may get taught right-clicking at school.
I remember classic Mac OS. One application could make everything fail due to lack of real process boundaries. It literally relied on how people are amazingly able to adapt to things like this and avoid doing what causes a crash (something which I notice a lot when I start using a new application), albeit not by deliberate design.
edit: ahh, it had ctrl-click back then: http://www.macwrite.com/beyond-basics/contextual-menus-mac-os-x (describes how ones in OS X differ from ones they had since OS 8)
Key quote:
What I like about 2 buttons is that it is discoverable. I.e. you go like, ohh, there’s two buttons here, what will happen if I press the other one?
Now that you mention it, I remember discovering command-click menus in OS 9 and being surprised. (In some apps, particularly web browsers, they would also appear if you held the mouse button down.)
Most people didn’t (and don’t) understand the contextual distinctions and interface themes needed to design a two-button mouse interface.
The current system is to throw design patterns against the wall and copy those that stick.
Robert Wright, The Moral Animal
-- Chinese proverb
Ian Stewart invented the game of tautoverbs. Take a proverb and manipulate it so that it’s tautological, e.g. “Look after the pennies and the pennies will be looked after” or “No news is no news”. There’s a kind of Zen joy in forming them.
This proverb, however, is already there.
--Nicholas Epley, “Blackwell Handbook of Judgment and Decision Making”
Google tells me Dennett referred to this, in arguing that there is nothing mysterious about consciousness, because it is just a set of many tricks.
It’s a shame that the niceness of the story of the tuned deck makes Dennett’s bad argument about consciousness more appealing.
Dennett’s argument that there is no hard problem of consciousness can be summarized thus:
1. Take the hard problem of consciousness.
2. Add in all the other things anybody has ever called “consciousness”.
3. Solve all those other issues one by one.
4. Conveniently forget about the hard problem of consciousness.
Would this count as doing something deliberately complicated to throw off anyone with an Occam prior?
You don’t have to put the little ‘>’ signs in on every line, just the beginning of a paragraph.
Fixed. Thanks.
.
(Only in the sense of constructing some plan of action (or inaction) that currently seems no worse than others, not in the sense of deciding to believe things you have no grounds for believing. “Make up your mind” is a bad phrase because of this equivocation.)
...Unless your decision makes things worse.
David Hull, Science and Selection: Essays on Biological Evolution and the Philosophy of Science
This is the idea behind dual n-back: the only strategy your lazy brain can implement to do better at the game is to increase its working memory.
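For anyone unfamiliar with the game, here is a minimal Python sketch of the trial structure (an illustration of the task, not any particular implementation): each trial pairs a grid position with a letter, and the player has to detect matches with the stimulus n steps back in both streams at once, which is what loads working memory.

```python
import random

N = 2                       # the "n" in n-back
POSITIONS = list(range(9))  # cells of a 3x3 grid (visual stream)
LETTERS = "CHKLQRST"        # spoken letters (audio stream)

def run_session(trials=20):
    history = []
    matches = {"position": 0, "letter": 0}
    for _ in range(trials):
        stimulus = (random.choice(POSITIONS), random.choice(LETTERS))
        if len(history) >= N:
            back = history[-N]                             # stimulus N steps ago
            matches["position"] += stimulus[0] == back[0]  # visual match
            matches["letter"] += stimulus[1] == back[1]    # audio match
        history.append(stimulus)
    # A real game would compare these true matches against the player's key presses.
    return matches

print(run_session())
```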
Nietzsche, Beyond Good and Evil
-- Ari Rahikkala
Is this really a rationality quote, or is it just pro-Yudkowsky?
It does set a standard for the clarity of any writing you do, but I’ve seen substantially better quotes on that topic before.
I say yes. This is the difference between learning ‘Philosophy’, i.e. how to quote deep stuff with names like Wittgenstein and Nietzsche attached, and just learning stuff about reality that is obvious once you see it. Once the knowledge is there it shouldn’t seem remarkable at all.
For me at least this is one of the most important factors when evaluating a learning source. Is the information I’m learning simple in retrospect, or is it a bunch of complicated rote learning? If the latter, is there a good reason related to complexity in the actual world that requires me to be learning complex arbitrary things?
Related to hindsight bias and inferential distances. I’d sort of noticed this happening before, but if I hadn’t realized other people had the same experience I probably would have underestimated the degree to which rationality had changed my worldview and so underestimated the positive effect of spreading it to others.
(Your “\” key is adjacent to “Shift”.)
--Mencius Moldbug
Laplace
Joe Sobran
Is that the case?
The majority dreams about a “just society”; the minority dreams about a better one through technological advances. Never mind that there was the 20th century, when “socialism” brought us nothing and technology brought us everything.
Echoing a utopian meme is analogous to stamping an instance of an invention, not to inventing something anew. It is inventors of utopian dreams that I doubt to be more numerous than inventors of technology.
And let’s not forget how many millions of patents there are; I don’t think there are that many millions of utopias, even if we let them differ as little as patents can differ.
You may be right here. Utopias are usually also quite uninnovative. “All people will be brothers and sisters with enough to eat and Bible (or something else stupid) reading in a community house every night”.
Variations are not that great.
Can you invent a utopia? A utopia is an incoherent concept about a society that contains too many internal contradictions or impracticalities to ever exist. Thus, it cannot be invented any more than a perpetual motion machine can be.
If you do consider utopias inventable, what’s the difference between “inventing a new utopia” and “having a new preference”? You want X; you dream of a world where you get X, inventing Utopia X.
I feel obliged to point out that Socialdemocracy is working quite well in Europe and elsewhere, and we owe it, among other things, free universal health care and paid vacations. Those count as “hidden potentiality of the real.” Which brings us to the following point: what’s, a priori, the difference between “hidden potentiality of the real” and “unreal”? Because if it’s “stuff that’s actually been made”, then I could tell you, as an engineer, of the absolutely staggering amount of bullshit patents we get to prove are bullshit every day. You’d be amazed how many idiots are still trying to build Perpetual Motion Machines. But you’ve got one thing right: we do owe technology everything, the same way everyone owes their parents everything. Doesn’t mean they get all the merit.
It’s not fair to say we ‘owe’ Socialdemocracy for free universal health care and paid vacations, because they aren’t so much effects of the system as they are fundamental tenets of the system. It’s much like saying we owe FreeMarketCapitalism for free markets—without these things we wouldn’t recognize it as socialism. Rather, the question is whether the marginal gain in things like quality of living are worth the marginal losses in things like autonomy. Universal health care is not an end in itself.
I dunno man, maybe it’s a confusion on my part, but universal health coverage for one thing seems like a good enough goal in and of itself. Not specifically in the form of a State-sponsored organization, but the function of everyone having the right to health treatments, of no-one being left to die just because they happen not to have a given amount of money at a given time. I think that, from a humanistic point of view, it’s sort of obvious that we should have it if we can pay for it.
Free universal health care is a good thing in itself; the question is whether or not that’s worth the costs of higher taxes and any bureaucratic inefficiencies that may exist.
The healthcare isn’t actually “free”. It’s either paid for individually, collectively on a national level, or at some intermediate level, e.g., by insurance companies. The question is: what is the most efficient way to deliver it?
Well, at least the bureaucratic inefficiencies are entirely incidental to the problem, and there’s no decisive evidence for corporate bureaucracies to be any better than public ones (I suspect partisanship gets in the way of finding out said evidence, as well as a slew of other variables), so that factor… doesn’t factor. As for the higher taxes… how much are you ready to pay so that, the day you catch some horrible disease, the public entity will be able to afford diverting enough of its resources to save you? What are you more afraid of, cancer and other potentially-fatal diseases that will eventually kill you, terrorism/invading armies/criminals/people trying to kill you, boredom...? What would be your priorities in assigning which proportion of the taxes you pay goes to funding what projects?
… Actually, that might be a neat reform: budget decisions made by combining individual budget assignments from every citizen…
This claim is disputed, but I have negligible information either way.
Personally, I say that universal health care would be worth the higher taxes. For any given person the answer depends on their utility function: the relative values assigned to freedom, avoidance of harm, happiness, life, fairness,etc.
This sets off my Really Bad Idea alarm. I don’t trust the aggregate decisions of individual citizens to add up to any kind of sane budget relative to their CEV. (Note: the following sentences are American-centric.) Probably research would get massively under-funded. Defense would probably be funded less than it is now, but that might well put it closer to the optimal value if it forced some cost-effectiveness increases.
Basically, each person would assign all their taxes to whatever they thought was most important, thus prioritizing programs according to how many people pick them as first choice, regardless of how many dollars it takes to make a given one work. The same kind of math used to discuss different voting/electoral college variants would inform this, I think, but I’m too lazy to look it up. And of course, if too much freedom was allowed in deciding, all companies and most people would decide to allocate their money to themselves.
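To make the aggregation rule concrete, here is a minimal Python sketch (all program names and numbers are hypothetical) contrasting the “first choice takes all” rule described above with a weighted split of each citizen’s taxes across their ranked priorities:

```python
from collections import defaultdict

# Hypothetical electorate: each citizen pays $100 in taxes and ranks programs.
citizens = [
    {"taxes": 100, "ranking": ["defense", "research", "health"]},
    {"taxes": 100, "ranking": ["health", "research", "defense"]},
    {"taxes": 100, "ranking": ["health", "defense", "research"]},
]

def first_choice_budget(citizens):
    """All of each citizen's taxes go to their top-ranked program."""
    budget = defaultdict(float)
    for c in citizens:
        budget[c["ranking"][0]] += c["taxes"]
    return dict(budget)

def weighted_budget(citizens, weights=(0.5, 0.3, 0.2)):
    """Each citizen spreads taxes across their ranking by fixed weights."""
    budget = defaultdict(float)
    for c in citizens:
        for program, w in zip(c["ranking"], weights):
            budget[program] += c["taxes"] * w
    return dict(budget)

print(first_choice_budget(citizens))  # {'defense': 100.0, 'health': 200.0}
print(weighted_budget(citizens))      # research gets funded despite no first-place votes
```

Under the first rule, a program nobody ranks first gets nothing, no matter how many dollars it needs to work; the weighted variant is one possible patch.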
Hm. That’d be some very near-sighted companies and people, don’t you think? The Defending Your Doorstep fallacy etc. etc. Still, it could work with some education of the public (“Dear viewers, THIS is what would happen if everyone decided all the money should go to the Army right after a terrorist attack”) and some patches (I can’t imagine why people would put all their money into whatever they think is most important, rather than distributing it in an order of priorities: usually people’s interests aren’t so clear-cut that they put one cause at such priority that the others become negligible… but if they did do that, just add a rule that there’s only so much of your money you can dedicate to a specific type of endeavor and all endeavors related).
This reminds me of Kino’s Journey and the very neat, simplistic solutions people used for their problems. The main reason those solutions failed was that the involved people were incredibly dumb at using them. The Democracy episode almost broke my willing suspension of disbelief, as did the Telepathy one. Are you familiar with that story?
Re your 1st paragraph: you have a much higher opinion of human rationality than myself. I hope you’re right, but I doubt it.
Re your second paragraph: I am currently watching Kino’s Journey, and will respond later. Thanks for the reference, it sounds interesting.
Human rationality can be trained and improved, it’s not an innate feature. To do that is part of the entire point of this site.
I hope you enjoy it. It is very interesting. Beware of generalizing from fictional evidence… but fiction is sometimes all we have to explore certain hypotheticals...
True. Individual budget allocation would be a bad idea in present day America, but it wouldn’t be a bad idea everywhere and for all time.
What does this mean? In particular, what does “universal” mean?
It means that each person in the country would, if ey got sick, be able to receive affordable treatment. This is true in, for example, Great Britain, where the NHS pays for people’s medical care regardless of their wealth. It is not true in the United States, where people who cannot afford health insurance and do not have it provided by their employer go without needed treatments because they can’t afford them.
ETA: does someone think this definition is wrong? What’s another definition I’m missing?
How different are the ways a society would treat citizens and various other people not covered by a system, such as Americans? What about tourists?
Isn’t it true that Great Britain could provide better medical care if it diverted resources currently spent elsewhere? How are any other government expenditures and fungible things (like autonomy) ever justified if health could be improved with more of a focus on it?
Do you primarily value a right to medical care, or instead optimal health outcomes?
An intuition pump: What if a genie offered to, for free, provide medical care to all people in a society equivalent to that the American President gets, and a second genie, much better at medical care, offered even better average health outcomes for all people, with the caveat that he would randomly deny patients care (every patient would still have a better chance under the second genie, until the patient was rejected, of course). Both conditional on no other health care in the society, especially not for those denied care by the second genie. Which genie would you choose for the society? Under the first, health outcomes would be good and everyone would have a right to health care, under the second, health outcomes would be even better for every type of patient, but there would be no right to care and some people with curable diseases would be left to die.
If you would choose the first genie, your choice increases net suffering every person can expect.
If you would choose the second genie, then you’re making a prosaic claim about the efficiency of systems rather than a novel moral point about rights for disadvantaged people—a claim that must be vulnerable to evidence and can’t rightly be part of your utility function.
What if there were a third genie much like the one you chose, except the third genie could provide even better care to rich people. Would you prefer the third genie and the resulting inequality?
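A quick worked example, with invented probabilities, of the claim that the second genie can reject some patients outright and still give every patient a better chance ex ante:

```python
# Invented numbers for the genie intuition pump.
p_cure_genie1 = 0.90             # first genie: treats everyone
p_reject_genie2 = 0.05           # second genie: randomly denies 5% of patients...
p_cure_if_treated_genie2 = 0.98  # ...but treats the rest far more effectively

expected_genie2 = (1 - p_reject_genie2) * p_cure_if_treated_genie2  # = 0.931
assert expected_genie2 > p_cure_genie1
print(f"Chance of cure: genie 1 = {p_cure_genie1:.0%}, genie 2 = {expected_genie2:.1%}")
```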
I prefer the genie which provides the maximum average utility* to the citizens, with the important note that utility is probably non-linear in health. The way I read your comment, that would appear to be the third. Also note that the cost of providing health care is an important factor in real life, because that money could also go to education. Basically, I do my best to vote like a rational consequentialist interested in everyone’s welfare.
*I am aware that both average and total utilitarianism have mathematical issues (repugnant conclusion etc), but they aren’t relevant here.
OK, so when you say “Personally, I say that universal health care would be worth the higher taxes,” you are referring to internal resource distribution along state, national, voluntary, or other lines to achieve efficient aggregate outcomes by taking advantage of the principle of diminishing returns and taking from the rich and giving to the poor. You don’t believe in a right to care, or equal treatment for outgroup non-citizens elsewhere, or that it’s very important for treatment to be equal between elites and the poor. Not an unusual position, it’s potentially coherent, consistent, altruistic, and other good things.
I asked for clarification from your original “Personally, I say that universal health care would be worth the higher taxes,” because I think that phrasing is compatible with several other positions.
Corporations that develop excessive inefficiencies tend to go bankrupt. (Ok, sometimes they can get government bailouts or are otherwise propped up by the government, but that is another argument against government intervention.)
Not if all their competitors are also inefficient.
I don’t see a corporate world which has spawned works like the Dilbert comics as inside critiques as a compelling example of a race to the top.
The advertising would get very tiresome, but probably not bad enough to oppose the idea for that reason.
Well, maybe if it wasn’t for the taxes I would be able to afford to pay for treatment myself. (Taxes are a zero-sum process, actually negative sum because of the inefficiencies.) If the idea is risk mitigation, then why not use a private insurance company?
Well, given that the government’s alleged goal is to provide the service while the private organization’s alleged goal is to make a profit, one would expect the State (I like to call the organization the State or the Administration: the Government should simply mean whoever the current team of politically appointed president/ministers/cabinet are, rather than the entire bureaucracy) to be less likely to “weasel out of” paying for your treatment, a risk I (in complete and utter subjectivity, and in the here and now) deem more frightening (and frustrating) than the disease itself.
And yes, risk mitigation is always negative sum, that’s kind of a thermodynamic requisite.
Well, since the ministry of health’s budget is finite, whereas the potential amount of money that could be spent on everyone’s treatment isn’t, the state very quickly discovers that it too needs to find ways to weasel out of paying for treatment.
And the more layers of bureaucracy involved, the more negative sum it is.
1. You mean they engage in the exact same kind of legal practices as private groups, with the same frequency? Given the difference in position, methodology and resources, I doubt it, but I don’t have any evidence pointing to either side about the behavior of Universal Health Coverage systems. I’d need time to ask a few people and find a few sources.
2. I don’t think it’s a matter of “layers” so much as one of how those layers are organized. The exact same number of people can have productivity outputs that are radically different as a function of the algorithms used to organize their work. Your post seems to imply that State services have more bureaucratic layers than private ones. I’d think that’d be something to decide case by case, but I wouldn’t say it’s a foregone conclusion: private insurers are infamous for being bureaucratic hells too. Ones deliberately designed to mislead and confuse unhappy clients, at that.
This conversation appears to not have incorporated the very strong evidence that higher health care spending does lead to improved health outcomes.
Personally I’d reform the American system in one of two ways: either privatize health care completely, so that the cost of using a health care provider is directly connected to the decision to use health care, OR turn the whole thing over to the state and ration care (alternatively you could do the latter for basic health care and then let individuals purchase anything above that). What we have now leaves health care consumption decisions up to individuals but collectivizes costs—which is obviously a recipe for inflating an industry well above its utility.
At what margin? Using randomized procedures?
http://www.overcomingbias.com/2009/01/free-medicine-no-help-for-ghanaian-kids.html
http://www.overcomingbias.com/2007/05/rand_health_ins.html
http://www.overcomingbias.com/2011/07/the-oregon-health-insurance-experiment.html
This instance of that conversation.
What does this mean?
What does this mean?
What does this mean?
I have left it ambiguous on purpose. What this means specifically depends on the means available at any given time.
IDEALLY: Universal means everyone should have a right to as much health service as is necessary for their bodies and minds to function as well as they can, if they ask for it. That would include education, coaching, and sports, among many others. And nobody should ever be allowed to die if they don’t want to and there’s any way of preventing it.
Between “leaving anyone to die because they don’t have the money or assets to pay for their treatment” [your question puzzles me; what part of this scenario don’t you understand?] and “spending all our country’s budget on progressively changing the organs of seventy-year-olds”, there are a lot of intermediate points. The touchy problem is deciding how much we want to pay for, and how, and who pays it for whom. No matter how you cut the cake, given our current state of development, at some point you have to say that X person dies in spite of their will, because either they can’t afford to live or society can’t afford to keep them alive. So, are you going to deny that seventy-year-old their new organs?
Yes, it’s amazing how many bad decisions are made because it’s heartbreaking to just say no.
More like it’s potentially corrupting, but yeah, that too.
Yes, unless there is nobody else that can use them. If my watching of House tells me anything, it is standard practice to prioritize by this kind of criterion.
I like this answer, if only for emotional reasons :). I also think the vast majority of seventy-year-olds would be compelled by this argument.
Resources are limited and medical demand is not. The medical response time if the President of the United States gets shot is less than if anyone else gets shot. It’s not possible to give everyone as much health protection as the president. So it’s not a scenario. I can imagine each person as being the only person on earth with such care, and I can imagine imagining a single hypothetical world that has each person with that level of care, but I can’t actually imagine it.
That indicates that no argument about the type of thing to be done will be based on a difference in kind. It won’t resemble saying that we should switch from what happens at present to “no-one being left to die just because they happen not to have a given amount of money”. We currently allow some people to die based on rationing, and you are literally proposing the impossible to connote that you would prefer a different rationing system, but then you get tripped up when sometimes speaking as if the proposal is literally possible.
Declaring that someone has a right is declaring one’s willingness to help that person get something from others over their protests. We currently allow multimillionaires, and we allow them to spend all their money trying to discover a cure for their child’s rare or unique disease, and we allow people to drive in populated areas.
We allow people to spend money in sub-optimal ways. Resources being limited means that not every disease gets the same attention. Allowing people to drive in populated areas is implicitly valuing the fun and convenience of some people driving over the actuarially inevitable death and carnage to un-consenting pedestrians.
I don’t understand how you want to ration or limit people, in an ideal world, because you have proposed the literally impossible as a way of gesturing towards a different rationing system (infinitely) short of that ideal and (as far as I can see) not different in kind than any other system.
By analogy, you don’t describe what you mean when you declare “infinity” a number preferable to 1206. Do you mean that any number higher than 1206 is equally good? Do you mean that every number is better than its predecessor, no matter what? Since you probably don’t, then...what number do you mean? Approximately?
I can perhaps get an idea of the function if you tell me some points of x (resources) and y (what you are proposing).
Not quite. ER doctor.
Your post confuses me a lot: I am being entirely honest about this; there seem to be illusions of transparency and (un)common priors. The only part I feel capable of responding to is the first: I can perfectly imagine every human being having as much medical care as the chief of the wealthiest, most powerful organization in the world, in an FAI-regimented society. For a given value of “imagining”, of course: I have a vague idea of nanomachines in the bloodstream, implants, etc. I basically expect human bodies to be self-sufficient in taking care of themselves, and able to acquire and use the necessary raw materials with ease, including being able to medically operate on themselves. The rare cases will be left to the rare specialist, and I expect everyone to be able to take care of the more common problems their bodies and minds may encounter.
As for the rest of your post:
What are people’s rationing optimization functions? Is it possible to get an entire society to agree on a single one, for a given value of “agree”? Or is it that people don’t have a consistent optimization function, and that it’s not so much a matter of some things being valued over others as a matter of tradition and sheer thoughtless inertia? Yes, I know I am answering questions with questions, but that’s all I’ve got right now.
Thank you for leading with that.
This seems to sidestep the limited resources issue, making your argument not clearly apply outside of that context.
Let me give an example outside of health to discuss the resources issue. I have read that when a guy tried to make a nuclear power source in his garage from clock parts, government agents swooped in very soon after it started emitting radiation—presumably there are people monitoring for that, with field agents ever-ready to pursue leads. This means that, for some 911 calls where the nuclear team would be the first to the scene, we allow the normal police to handle it, even at the risk of people’s lives. If that isn’t the case, imagine a world in which it were so, and in which it would be easy to tell that the police would be slower than the nuke guys (who don’t even leave their stations most days). I think having such an institution would be worthwhile, even at the cost of crimes in progress being responded to slower.
Similarly, I think many things would be worth diverting resources from better policing, such as health—and from health to other things, such as better policing, and from both to fun, privacy, autonomy, and so forth. I’m only referring to a world in which resources are limited.
It is possible that there is a society wealthy enough to ensure very good health care for those it can influence by eliminating all choice about what to eat, mandating exercise, eliminating privacy to enforce those things, etc. It’s not obvious to me that it’s always the right choice to optimize health or that that would be best for the hypothetical society.
Considering the principle of diminishing returns, there’s no plausible way of describing people’s preferences such that all effort should be put towards better health. We don’t have to be able to describe them perfectly to say that being forced to eat only the healthiest foods does not comport with them—ask any child told to eat vegetables before dessert.
Comfortable, well-maintained social democracies were the result of a very peculiar set of circumstances and forces which seem very unlikely to return to Europe in the foreseeable future.
Would you care to expand on that?
Sure, though I hope you don’t mind me giving the cliff note version.
Demographic dividend is spent. (The rate of dependency falls after the introduction of modernity (together with legalised contraception) because of lower birth rates. It later rises again as the population ages a few decades after the drop in birthrates)
Related: precisely because the society is, on average, old, it seems incapable of embracing any kind of new ideas or a change in its stated ideals and values. Not only are young people few, but they are extremely conformist, outside of a few designated symbolic kinds of “rebelling”, compared to young people in other parts of the world. Oversocialized indeed.
Free higher education and healthcare produced a sort of “social uplift dividend”: suddenly the cycle of poverty was broken for a whole bunch of people who were capable of doing all kinds of work, but simply didn’t have the opportunity to get the necessary education to do so. After two generations of great results, not only has this obviously hit diminishing returns, there are also some indications that we are actually getting less bang for the buck on these policies as time continues. Though it’s hard to say, since European society has also shifted away from meritocracy.
Massive destruction of infrastructure and means of production that enabled high demand for rebuilding much of the infrastructure (left half of the bell curve had more stuff to do than otherwise, since the price of the kinds of labour they are capable of was high).
The burden of technological unemployment was not as great as it is today. (gwern’s arguments regarding its existence were part of what changed my opinion away from the default view most economists seem to take. After some additional independent research I found myself not only considering it very likely but looking at 20th-century history from an entirely fresh perspective.)
Even though there are some indications that youth in several European countries are more trusting, the general trend still seems to be a strong move away from high-trust societies.
Thank you. Cliff notes is fine. What do you expect social democracies to turn into?
I put significantly lower confidence in these predictions than those of the previous post.
Generally speaking I expect comfortable, well maintained social democracies to first become uncomfortable, run down social democracies. Stagnation and sclerosis. Lower trust will mean lower investment which together with the rigidity and unadaptability will strengthen the oligarchic aspect of the central European technocratic way of doing things. Nepotism will become more prevalent in such an environment.
Overall violent crime will still drop, because of better surveillance and other crime-fighting technology, but surprising outbursts of semi-organized coordinated violence will be seen for a decade or two more (think London). These may become targeted at prosperous urban minorities. Perhaps some politically motivated terrorist attacks, which however won’t spiral out into civil wars, but will produce a very damaging backlash (don’t just think radical Islam here; think Red Army Faction spiced with a nationalist group or two).
What, you mean like in Gangs of New York?
Could you please give more links to the stuff that helped you form these opinions? I’m very interested in this, especially in explaining the peculiar behaviour of this generation’s youth as opposed to that of the Baby Boomers when they were the same age. After all, it’s irrational to apply the same tactics to a sociopolitical landscape that’s wildly different from the one in which those tactics got their most spectacular successes. Exiting the mind-killing narratives developed in two-party systems and finding a way to rethink the problems of this age from scratch is a worthy goal for the rationalist project, especially in a “hold off on proposing solutions”, analyze-the-full-problem-and-introduce-it-from-a-novel-angle sense. Publications such as, say, Le Monde Diplomatique are pretty good at presenting well-researched, competently presented alternative opinions, but they still suffer a lot from “political leanings”.
I know we avoid talking politics here precisely because of its mind-killing properties, able to turn the most thoughtful of agents into a stubborn blind fool, but I think it’s also a good way of putting our skills to the test, and of refining them.
Be fair. We tried socialism once (in several places, but with minor variations). We tried a lot of technology, including long before the 20th century.
I think socialism must fail because humans, once freed from material want, will compete for status. Status inequality will activate much the same sentiments as material inequality did. To level status, one needs to embark on a massive value-engineering campaign. These have so far always created alternative status inequalities, thus creating internal contradictions which, combined with increasing material costs, eventually bring the dissolution of the system and a partial undoing of the engineering efforts.
If technology advances to the point where such massive social engineering becomes practical and is indeed used for such a purpose on the whim of experts in academia/a democratic consensus/revolutionary vanguard… the implications are simply horrifying.
--Haruki Murakami, Kafka on the Shore, 2006, p. 255
Not if Western society is anything to go by. Not asking (but knowing the answer) produces a lifetime’s worth of successes, as far as I can tell.
--The Lion King opening song
Do you consider this a promotion of fun theory? Or a justification for living forever?
Both.
It can also be an indication that everything is more than one person/mind can handle. By stepping into the sun, we enjoy the warmth and may be overwhelmed by the world as we see it. The song’s lyrics seem cautionary, suggesting that despite the warmth of being in the world, one should not attempt to see everything or do everything. This is rational: there are things we may not enjoy as much as others. To reduce our overall enjoyment by not placing parameters on our activities would be irrational, in my opinion.
My initial guess was “keep learning, there’s always more to learn.”
Sheldon Ross
Keith Stanovich, What Intelligence Tests Miss
Better memory and processing power would mean that, probabilistically, more businessmen would realize there are good business opportunities where they saw none before. That would create more jobs and a more efficient economy, not the same economy more quickly.
ER doctors can now spend more processing power on each patient that comes in. Out of their existing repertoire they would choose better treatments for the problem at hand than they would have otherwise. A better memory means that they would be more likely to remember every step on their checklist when prepping for surgery.
It is not uncommon for people to make stupid decisions with mild to dire consequences because they are pressed for time. Everyone now thinks faster and has more time to think. Few people are pressed for time. Fewer accidents happen. Better decisions are made on average.
There are problems which are not human vs human but are human vs reality. With increased memory and processing power humanity gains an advantage over reality.
By no means is increasing memory and processing power a silver bullet, but it seems considerably more than everything only moving “much more quickly!”
Edit: spelling
The potential problem with your speculation is that the relative reduction of the mandatory-work / cognitive-power ratio may be a strong incentive to increase individual workload (and maybe massive lay-offs). If we’re reasonable, and use our cognitive power wisely, then you’re right. But if we go the Hansonian Global Competition route, the Uber Doctor won’t spend more time on each patient, but just as much time on more patients. There will be too many doctors, and the worst third will do something else.
Possibly because people would be driving faster?
It’s a nice list, but the core point strikes me as liable to be simply false. I forget who it was presenting this evidence—it might even have been James Miller, it was someone at the Winter Intelligence conference at FHI—but they looked at (1) the economic gains to countries with higher average IQ, (2) the average gains to individuals with higher IQ, and concluded that (3) people with high IQ create vast amounts of positive externality, much more than they capture as individuals, probably mostly in the form of countries with less stupid economic policies.
Maybe if we’re literally talking about a pure speed and LTM pill that doesn’t affect at all, say, capacity to keep things in short-term memory or the ability to maintain complex abstractions in working memory, i.e., a literal speed and disk space pill rather than an IQ pill.
Absolutely—IQ is very important, especially in aggregate. And yet, I’d still bet that the next day people will just be moving faster.
I think it’s worth making the distinction between having hardware which can support complex abstractions and actually having good decision-making software in there. Although it’d be foolish to ignore the former, because it tends to lead to the latter, it seems to be the latter that is more directly important.
That, and the fact that people can generally support better software than they pick up on their own, is what makes our goal here doable.
If this is true, it would affect my decisions about whether and how to have children. So I’d really like to see the source if you can figure out what it was.
James Miller says:
That’s helpful; thanks.
Sounds plausible. If anybody finds the citation for this, please post it.
How about http://www.psychologicalscience.org/index.php/news/releases/are-the-wealthiest-countries-the-smartest-countries.html ?
Citing “Cognitive Capitalism: The impact of ability, mediated through science and economic freedom, on wealth”. (PDF not immediately available in Google.)
EDIT: efm found the PDF: http://www.tu-chemnitz.de/hsw/psychologie/professuren/entwpsy/team/rindermann/publikationen/11PsychScience.pdf
Or http://www.nickbostrom.com/papers/converging.pdf :
EDIT EDIT: high IQ predicts superior stock market investing even after the obvious controls. High-IQ types are also more likely to trust the stock market enough to participate more in it.
“Do you have to be smart to be rich? The impact of IQ on wealth, income and financial distress”, Zagorsky 2007:
One could also phrase this as: “if we control for factors which we know to be caused by intelligence, such as highest level of education, then mirabile dictu! intelligence no longer increases income or wealth very much!”; or, “regressions are hard, let’s go shopping.”
Apropos of http://lemire.me/blog/archives/2012/07/18/why-we-make-up-jobs-out-of-thin-air/
Intelligence: A Unifying Construct for the Social Sciences, Lynn & Vanhanen 2012 (excerpts)
“IQ in the Ramsey Model: A Naïve Calibration”, Jones 2006:
That quote does not appear to come from the linked paper, and I’m confused as to how a paper from 2006 was supposed to have a citation from 2009.
Only the first paragraph is wrong (mixed it up with a paper on the Swiss iodization experience I’m using in a big writeup on iodide self-experimentation). Fixed.
“Economic gains resulting from the reduction in children’s exposure to lead in the United States”, Grosse et al 2002 (fulltext)
Their summary estimate from pg5/567 is a lower/middle/upper bound on what each IQ point is worth, in net present value 2000 dollars: 12,700 / 14,500 / 17,200.
(Note that these figures, as usual, are net estimates of the value to an individual: so they are including zero-sum games and positional benefits. They aren’t giving estimates of the positive externalities or marginal benefits.)
“Quality of Institutions : Does Intelligence Matter?”, Kalonda-Kanyama & Kodila-Tedika 2012:
“IQ and Permanent Income: Sizing Up the “IQ Paradox””:
“Are Smarter Groups More Cooperative? Evidence from Prisoner’s Dilemma Experiments, 1959-2003”, Jones 2008:
Later: http://econlog.econlib.org/archives/2012/10/group_iq_one_so.html
What if higher SAT schools tend to be more prestigious and have stronger student identification?
Dunno. It’s consistent with all the other results about IQ and not school spirit...
Hm. Looks like going to a public/private school didn’t seem to mediate student cooperation all that much, which probably works against my theory.
They’re all US studies. Do we have anything from other cultures?
“IQ in the Production Function: Evidence from Immigrant Earnings”, Jones & Schneider 2008:
“Costs and benefits of iodine supplementation for pregnant women in a mildly to moderately iodine-deficient population: a modelling analysis” (mirror; appendices), Monahan et al 2015
IQ estimates:
All the details are in the Monahan et al 2015 appendices
The 8 studies are listed on pg8 of the appendix, Table 1:
Fletcher J. “Friends or Family? Revisiting the Effects of High School Popularity on Adult Earnings”. 2013. National Bureau of Economic Research Working Papers: 19232
Lutter RW. “Valuing children’s health: A reassessment of the benefits of lower lead levels”. AEI-Brookings Joint Center Working Paper No. 00-02. 2000.
Mueller G, Plug E. “Estimating the Effect of Personality on Male and Female Earnings”. Ind Lab Relat Rev. 2006;60(1):3-22.
Salkever DS. “Updated estimates of earnings benefits from reduced exposure of children to environmental lead”. Environ Res. 1995;70(1):1-6.
Schwartz J. “Societal benefits of reducing lead exposure”. Environ Res. 1994;66(1):105-24.
de Wolff P, van Slijpe ARD. “The Relation Between Income, Intelligence, Education and Social Background”. Europ Econ Rev. 1973;4(3):235-64.
Zax JS, Rees DI. “IQ, Academic Performance, Environment, and Earnings”. Rev Econ Stat. 2002;84(4):600-16.
Zagorsky JL. “Do you have to be smart to be rich? The impact of IQ on wealth, income and financial distress”. Intelligence. 2007;35(5):489-501.
(Note that by including covariates that are obviously caused by IQ rather than independent, and excluding any attempt at measuring the many positive externalities of greater intelligence, these numbers can usually be considered substantial underestimates of country-wide benefits.)
“The High Cost of Low Educational Performance: the long-run economic impact of improving PISA outcomes”, Hanushek & Woessmann 2010:
Needless to say, “cognitive skills” here is essentially a euphemism for intelligence/IQ.
But but Goodhart’s law!
And it’s also confusing correlation with causation; grades are in large part due to intelligence. Boosting scores may be useless.
“Education, Intelligence, and Attitude Extremity”, Makowsky & Miller 2012
“The relationship between happiness and intelligent quotient: the contribution of socio-economic and clinical factors”, Ali et al 2012; effect is weakened once you take into account all the relevant variables but does sort of still exist.
I think that you might be confusing causation and correlation here. Countries that started to industrialize earlier have higher average IQ and higher GDP per capita. That would produce the effect you refer to. Whether or not the increased intelligence then contributes to further economic growth is a different matter.
What third factor producing both higher IQ and then industrialization are you suggesting?
Obviously you’re not suggesting anything as silly as the idea that industrialization causes all observed IQ changes, because that simply doesn’t explain all examples, like East Asian countries:
That suggests that the correlation would have been less at that earlier time, which suggests the idea that the correlation of average IQ and average income has varied over history. Perhaps it has become stronger with increasing technological level—that is, more opportunities to apply smarts?
That certainly seems possible. Imagine a would-be programming genius who is born now, versus born in the Stone Age—he could become the wealthiest human to ever live (Bill Gates) or just the best hunter in the tribe (to be optimistic...).
Rindermann 2011: “Intellectual classes, technological progress and economic development: The rise of cognitive capitalism”; from abstract:
Here’s another one: “National IQ and National Productivity: The Hive Mind Across Asia”, Jones 2011
Above link is dead. Here is a new one: http://mason.gmu.edu/~gjonesb/JonesADR
“Exponential correlation of IQ and the wealth of nations”, Dickerson 2006:
It peeves me when scatterplots of GDP per capita versus something else use a linear scale—do they actually think the difference between $30k and $20k is anywhere near as important as that between $11k and $1k? And yet hardly anybody uses logarithmic scales.
Likewise, the fit looks a lot less scary if you write it as ln(GDP) = A + B*IQ.
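Concretely, a minimal sketch in R (here `lynn` is a hypothetical data frame with columns `iq` and `gdp` for mean national IQ and per-capita GDP):

```r
# Fit ln(GDP) = A + B*IQ by ordinary least squares; this is the same model
# as the exponential fit GDP = exp(A) * exp(B * IQ).
fit <- lm(log(gdp) ~ iq, data = lynn)
coef(fit)                      # A = intercept, B = coefficient on iq
plot(log(gdp) ~ iq, data = lynn)
abline(fit)                    # a straight line on the log scale
```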
Yes, Dickerson does point out that his exponential fit is a linear relationship on a log scale. For example, he does show a log-scale in figure 3 (pg3), fitting the most reliable 83 nation-points on a plot of log(GDP) against mean IQ in which the exponential fit looks exactly like you would expect. (Is it per capita? As far as I can tell, he always means per capita GDP even if he writes just ‘GDP’.) Figure 4 does the same thing but expands the dataset to 185 nations. The latter plot should probably be ignored given that the expansion comes from basically guessing:
Is it easy to compare the fit of their theory to the smart fraction theory?
I dunno. I’ve given it a try and while it’s easy enough to reproduce the exponential fit (and the generated regression line does fit the 81 nations very nicely), I think I screwed up somehow reproducing the smart fraction equation because the regression looks weird and trying out the smart-fraction function (using his specified constants) on specific IQs I don’t get the same results as in La Griffe’s table. And I can’t figure out what I’m doing wrong, my function looks like it’s doing the same thing as his. So I give up. Here is my code if you want to try to fix it:
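(Schematically, a reconstructed sketch rather than the original listing, with La Griffe’s constants baked into `sf`:)

```r
# Smart-fraction function with La Griffe's constants baked in:
# dollars out, not a fraction.
erf <- function(x) 2 * pnorm(x * sqrt(2)) - 1   # error function via pnorm
sf  <- function(iq, iq0) 69321 * (1/2) * (1 + erf((iq - iq0) / (15 * sqrt(2))))
sf(107, 108)   # ~32.8k, not the ~19.8k La Griffe's table gives for Hong Kong
```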
(In retrospect, I’m not sure it’s even meaningful to try to fit the `sf` function with the constants already baked in, but since I apparently didn’t write it right, it doesn’t matter.)
Hm, one thing I notice is that you look like you’re fitting sf against log(gdp). I managed to replicate his results in Octave, and got a meaningful result plotting smart fraction against gdp.
My guess at how to change your code (noting that I don’t know R):
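(So treat this as pseudocode: a sketch of the shape of the fit, reusing your `sf`; the start values are rough guesses.)

```r
# Regress gdp directly on the smart fraction (not log(gdp)),
# as gdp ~ m * sf(iq, 108) + b, reusing the sf() from the parent comment.
fit <- nls(gdp ~ m * sf(iq, 108) + b,
           data  = lynn,
           start = list(m = 70000, b = 0))
summary(fit)
```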
That should give you some measure of how good it fits, and you might be able to loop it to see how well the smart fraction does with various thresholds.
(I also probably should have linked to the refinement.)
I can’t tell whether that works, since you’re just using the same broken smart-fraction `sf` predictor; e.g. `sf(107,108)` ~> 32818, while the first smart fraction page’s table gives a Hong Kong regression line of 19817, which is very different from 33k.
The refinement doesn’t help with my problem, no.
Hmmm. I agree that it doesn’t match. What if by ‘regression line’ he means the regression line put through the sf-gdp data?
That is, you should be able to calculate sf as a fraction with
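(in R, say, something like the following, taking the population SD to be 15):

```r
# Smart fraction: share of a population with mean `iq` scoring above threshold `iq0`.
sf <- function(iq, iq0) pnorm(iq, mean = iq0, sd = 15)
sf(107, 108)   # ~0.47: a fraction this time, not a dollar figure
```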
And then regress that against gdp, which will give you the various coefficients, and a much more sensible graph. (You can compare those to the SFs he calculates in the refinement, but those are with verbal IQ, which might require finding that dataset / trusting his, and have a separate IQ0.)
Comparing the two graphs, I find it interesting that the eight outliers Griffe mentions (Qatar, South Africa, Barbados, China, and then the NE Asian countries) are much more noticeable on the SF graph than the log(GDP) graph, and that the log(GDP) graph compresses the variation of the high-income countries, and gets most of its variation from the low-income countries; the situation is reversed in the SF graph. Since both our IQ and GDP estimates are better in high-income countries, that seems like a desirable property to have.
With outliers included, I’m getting R=.79 for SF and R=.74 for log(gdp). (I think, I’m not sure I’m calculating those correctly.)
Trying to rederive the constants doesn’t help me, which is starting to make me wonder if he’s really using the table he provided or misstated an equation or something:
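(In sketch form, the refit with the threshold left free; the start values here are illustrative, per the trial-and-error mentioned below.)

```r
# Same affine model (sf as defined above), but let the IQ threshold f float
# instead of fixing it at 108.
fit2 <- nls(gdp ~ m * sf(iq, f) + b,
            data  = lynn,
            start = list(m = 40000, b = 0, f = 99))
coef(fit2)   # converges to m ~ 34779 and f ~ 99.64 rather than La Griffe's 108
```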
If you double 34779 you get very close to his $69,321, so there might be something going wrong due to the 1/2 that appears in uses of the `erf` to make a cumulative distribution function, but I don’t see how a threshold of 99.64 IQ is even close to his 108! (The weird start values were found via trial-and-error in trying to avoid R’s ‘singular gradient’ error; it doesn’t appear to make a difference if you start with, say, `f=90`.)
Most importantly, we appear to have figured out the answer to my original question: no, it is not easy. :P
So, I started off by deleting the eight outliers to make lynn2. I got an adjusted R^2 of 0.8127 for the exponential fit, and 0.7777 for the fit with iq0=108.2.
My nls came back with an optimal iq0 of 110, which is closer to the 108 I was expecting; the adjusted R^2 only increases to 0.7783, which is a minimal improvement, and still slightly worse than the exponential fit.
The value of the smart fraction cutoff appears to have a huge impact on the mapping from smart fraction to gdp, but doesn’t appear to have a significant effect on the goodness of fit, which troubles me somewhat. I’m also surprised that deleting the outliers seems to have improved the performance of the exponential fit more than the smart fraction fit, which is not what I would have expected from the graphs. (Though, I haven’t calculated this with the outliers included in R, and I also excluded the Asian data, and there’s more fiddling I can do, but I’m happy with this for now.)
And inadvertently provided an object lesson for anyone watching about the value of researchers providing code...
My intuition so far is that La Griffe found a convoluted way of regressing on a sigmoid, and the gain is coming from the part which looks like an exponential. I’m a little troubled that his stuff is so hard to reproduce sanely and that he doesn’t compare against the exponential fit: the exponential is obvious and has a reasonable empirical justification. Granted, Dickerson published in 2006 and La Griffe wrote the smart fraction essay in 2002, but he could at least have updated.
You need to delete any trailing whitespace in your indented R terminal output. (Little-known feature of LW/Reddit Markdown code blocks: one or more trailing spaces causes the newline to be ignored and the next line glommed on. I filed an R bug to fix some cases of it, but I guess it doesn’t cover `nls`, or you don’t have an updated version.)
I don’t understand your definition. `sf(iq,iq0)` makes sense, of course, and `m` presumably is the multiplicative scale constant LG found to be 69k, but what is this `b` here and why is it being added? I don’t see how this tunes how big a smart fraction is necessary, since shouldn’t it then be on the inside of `sf` somehow?
But using that formula and running your code (using the full dataset I posted originally, with outliers):
I emailed La Griffe via Steve Sailer in February 2013 with a link to this thread and a question about how his smart-fraction model works with the fresher IQ/nations data and compares to Dickerson’s work. Sailer forwarded my email, but neither of us has had a reply since; he speculated that La Griffe may be having health issues.
In the absence of any defense by La Griffe, I think Dickerson’s exponential works better than La Griffe’s fraction/sigmoid.
The theoretical justifications are entirely different, though. It seems reasonable to me to suppose there’s some minimal intelligence to be wealth-producing in an industrial society, and the smart fraction estimates that well and it predicts gdp well. But, it also seems reasonable to treat log(gdp) as a more meaningful object than gdp.
It’s also bothersome that the primary empirical prediction of the smart fraction model (that there is some stable gdp level that you hit when everyone is higher than the smart fraction) is entirely from the extrapolated part of the dataset, and this doesn’t seem noticeably better than the exponential model, whose extrapolations are radically different.
Yeah; I’m curious what they’d have to say about the relative merits of the two models. I’ll see if I can get this question to them.
Fixed, thanks!
It’s an offset, so that it’s an affine fit rather than a linear fit: the gdp level for a population with no people above 108 IQ doesn’t have to be 0. Turns out, it’s not significantly different from zero, but I’d rather discover that than enforce it (and enforcing it can degrade the value for m).
I’m not entirely sure… For individuals, log-transforms make sense on their own merits as giving a better estimate of the utility of that money, but does that logic really apply to a whole country? More money means more can be spent on charity, shooting down asteroids, etc.
The next logical step would be to bring in the second 2006 edition of the Lynn dataset, which increased the set from 81 to 113, and use the latest available per-capita GDP (probably 2011). If the exponential fit gets better compared to the smart-fraction sigmoid, then that’s definitely evidence towards the conclusion that the smart-fraction is just a bad fit.
I’d guess that he’d consider SF a fairly arbitrary model and not be surprised if an exponential fits better.
Why can’t the GDP be 0 or negative? Afghanistan and North Korea are right now exhibiting what such a country looks like: they can barely feed themselves and export so much violence or fundamentalism or other dysfunctionality that rich nations are sinking substantial sums of money into supporting them and fixing problems.
The argument would be that additional intelligence multiplies the per-capita wealth-producing apparatus that exists, rather than adding to it (or, in the smart fraction model, not doing anything once you clear a threshold).
There’s no restriction that b be positive, and so those are both options. I wouldn’t expect it to be negative because pre-industrial societies managed to survive, but that presumes that aid spending by the developed world is not subtracted from the GDP measurement of those countries. Once you take aid into account, then it does seem reasonable that places could become money pits.
That’s the intuitive justification for an exponential model (each additional increment of intelligence adds a percentage of the previous GDP), but I don’t see how this justifies looking at log transforms.
The difference would be a combination of negative externalities and changing Malthusian equilibriums: it has never been easier for an impoverished country like North Korea or Afghanistan to export violence and cause massive costs they don’t bear (9/11 directly cost the US something like a decade of Afghanistan GDP once you remove all the aid given to Afghanistan), and public health programs like vaccinations enable much larger populations than ‘should’ be there.
GDP ~ exp(IQ) is isomorphic to ln(GDP) ~ IQ, and I think log(dollars per year) is an easier unit to think about than something to the power of IQ.
[edit] The graph might look different, though. It might be instructive to compare the two, but I think the relationships should be mostly the same.
It’s worth pointing out that IQ numbers are inherently non-parametric: we simply have a ranking of performance on IQ tests, which are then scaled to fit a normal distribution.
If GDP ~ exp(IQ), that means that the correlation is better if we scale the rankings to fit a log-normal distribution instead (this is not entirely true because exp(mean(IQ)) is not the same as mean(exp(IQ)), but the geometric mean and arithmetic mean should be highly correlated with each other as well). I suspect that this simply means that GDP approximately follows a log-normal distribution.
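(A toy illustration in R of the two rescalings; the raw scores and constants are arbitrary.)

```r
# IQ scoring is rank-based: any monotone rescaling of the ranks is equally valid.
scores <- c(12, 47, 33, 8, 25)                   # raw test scores (made up)
p <- rank(scores) / (length(scores) + 1)         # empirical quantiles in (0,1)
iq_normal    <- qnorm(p, mean = 100, sd = 15)    # the usual normal scaling
iq_lognormal <- qlnorm(p, meanlog = log(100), sdlog = 0.15)  # log-normal alternative
```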
This doesn’t quite follow, since both per capita GDP and mean national IQ aren’t drawn from the same sort of distribution as individual production and individual IQ are, but I agree with the broader comment that it is natural to think of the economic component of intelligence measured in dollars per year as lognormally distributed.
“Salt Iodization and the Enfranchisement of the American Worker”, Adhvaryu et al 2013:
If, in the 1920s, 10 IQ points could increase your labor participation rate by 1%, then what on earth does the multiplier look like now? The 1920s weren’t really known for their demands on intelligence, after all.
And note the relevance to discussions of technological unemployment: since the gains are concentrated in the low end (think 80s, 90s) due to the threshold nature of iodine & IQ, this employment increase means that already, a century ago, people in the low-end range were having trouble being employed.
A 2012 Jones followup: “Will the intelligent inherit the earth? IQ and time preference in the global economy”
This is related, but not the research talked about. The Terman Project apparently found that the very highest IQ cohort had many more patents than the lower cohorts, but this did not show up as massively increased lifetime income.
http://infoproc.blogspot.com/2011/04/earnings-effects-of-personality.html
Unless we want to assume those 4x extra patents were extremely worthless, or that the less smart groups were generating positive externalities in some other mechanism, this would seem to imply that the smartest were not capturing anywhere near the value they were creating—and hence were generating significant positive externalities.
EDIT: Jones 2011 argues much the same thing—economic returns to IQ are so low because so much of it is being lost to positive externalities.
On its own, I don’t consider this strong evidence for the greater productivity of the IQ elite. If they were contributions to open-source projects, that would be one thing. But people doing work that generates patents which don’t lead to higher income—that raises some questions for me. Is it possible that extremely high IQ is associated with a tendency to become “addicted” to a game like patenting? Added: I think Gwern and I agree more than many people might think reading this comment.
Open-source contribution is even more gameable than patents: at least with patents there’s a human involved, checking to some degree that there is at least a little new stuff in the patent, while no one and nothing stops you from putting a worthless repo up on Github reinventing wheels poorly.
The usual arrangement with, say, industrial researchers is that their employers receive the unpredictable dividends from the patents in exchange for forking over regular salaries in fallow periods...
I don’t see why you would privilege this hypothesis.
Let me put it this way. Before considering the Terman data on patents you presented, I already thought IQ would be positively correlated with producing positive externalities and that there was a mostly one-way causal link from the former to the latter. I expected the correlation between patents and IQ. What was new to me was the lack of correlation between IQ and income, and the lack of correlation between patents and income. (Correction: there was actually a fairly strong correlation between IQ and income, just not between income and patents, conditional on IQ I think.) Surely more productive industrial researchers are generally paid more. Many firms even give explicit bonuses on a per-patent basis. So for me, given my priors, the Terman data you presented shifts me slightly against (correction: does not shift me for or against) the hypothesis that at the highest IQ levels, higher IQ continues to be associated with producing more positive externalities. Still, I think increasing people’s IQ, even the already gifted, probably has strong positive externalities unless the method for increasing it also has surprising (to me) side-effects.
I agree that measuring open-source contributions requires more than merely counting lines of code written. But I did want to highlight the fact that the patent system is explicitly designed to increase the private returns for a given innovation. I don’t think that there is a strong correlation between the companies/industries which are patenting the most and the companies/industries which are benefiting the world the most.
Yes, but the bonuses I’ve heard of are in the hundreds-to-thousands-of-dollars range, at companies committed to patenting, like IBM. This isn’t going to make a big difference to lifetime incomes, where the range is 1-3 million dollars, although the data may be rich enough to spot these effects (and how many patents is even ‘4x’? 4 patents on average per person?), and I suspect these bonuses come at the expense of salaries & benefits. (I know that’s how I’d regard it as a manager: shifting risk from the company to the employee.)
And I think you’re forgetting that income did increase with each standard deviation by an amount somewhat comparable to my suggested numbers for patents, so we’re not explaining why IQ did not increase income whatsoever, but why it increased it relatively little, why the patenters apparently captured relatively little of the value.
Whoa, I did allow myself to misread/misremember your initial comment a bit, so I’ll dial it back slightly. The fact that even at the highest levels IQ is still positively correlated with income is important, and it’s what I would have expected, so the overall story does not undermine my support for the hypothesis that at the highest IQ levels, higher-IQ individuals produce more positive externalities. I apologize for getting a bit sloppy there.
I would guess that if you had data from people with the same job description at the same company the correlation between IQ, patents, and income would be even higher.
Perhaps economic returns to IQ as so low because there are other skills which are good for getting economic returns, and those skills don’t correlate strongly with IQ.
Yes, this is consistent with the large income changes seen with some of the personality traits. If you have time, you could check the paper to see if that explains it: perhaps the highest cohort disproportionately went into academia or was low on Extraversion or something, or those subsets were entirely responsible for the excess patents.
3 more links:
“The Role of Cognitive Skills in Economic Development”
“The Role of School Improvement in Economic Development”
“An economic and rational choice approach to the autism spectrum and human neurodiversity” (Tyler Cowen)
If anyone is curious, I am moving my bibliography here to http://www.gwern.net/Embryo%20selection#value-of-iq and I will be keeping that updated in the future rather than continue this thread further.
How did they establish that economic gains are influenced by average IQ, rather than both being influenced by some other factor?
Sounds implausible to me, so I’m very interested in a citation (or pointers to similar material). If true, I’m going to have to do a lot of re-thinking.
Perhaps IQ correlates weakly with intelligence. If there are lots of people with high IQ, there are probably lots of intelligent people, but they’re not necessarily the same people. Hence, the countries with high IQ do well, but not the people.
I think you really need to see this google tech talk by Steven Hsu.
But naturally doing everything faster would be pretty freaking awesome in itself.
increased yearly economic growth (consequently higher average living standards since babies still take 9 months to make)
it would help everyone cram much more living into their lifespan.
it would help experts deal much better with events that aren’t sped up. Say, an oil leak in the Gulf of Mexico.
medical advances would arrive earlier meaning that lots of people who would otherwise have died might live for a few more productive (sped up!) years.
But I’m having way too much fun nitpicking, so I’ll just stop here. :)
Yes, especially this one.
Put differently, imagine a pill that made North Americans cognitively slower. Wouldn’t that be an obvious step down (for reasons symmetric to the ones you’ve highlighted)?
ISTM that there are lots of people who don’t want to cram more living into their lifespan, given the time they spend watching TV and stuff like that.
I think it would take more than a day for people to get the possible good effects of the change.
A better memory might enable people to realize that they have made the same mistake several times. More processing power might enable them to realize that they have better strategies in some parts of their lives than others, and explore bringing the better strategies into more areas.
I’m not convinced. One very simple gain from
is the ability to consider more alternatives. These may be alternative explanations, designs, or courses of action. If I consider three alternatives where before I could only consider two, if the third one happens to be better than the other two, it is a real gain. This applies directly to the case of
Don’t confuse time-to-solution with correctness. Speed and the number of facts at hand will not give you a good result if your fundamental assumptions (aka your algorithm) are wrong.
You cannot make up in quantity what you lose on each transaction, as the dot-com folks proved repeatedly.
--Steve Sailer
Careful now.
I don’t quite see how this is a Rationality Quote.
Tribal attire by another name.
Plato, Philebus
G.K. Chesterton
If I were a jelly fish,
Ya ha deedle deedle, bubba bubba deedle deedle dum.
All day long I’d biddy biddy bum.
If I were a jelly fish.
I wouldn’t have to work hard.
Ya ha deedle deedle, bubba bubba deedle deedle dum.
I prefer if I were a deep one.
(If you aren’t familiar with this song, I strongly recommend looking at all of Shoggoth on the Roof.)
A gentle introduction to the mythos.
Julian Huxley, Darwinism To-Day
A nod to Molière’s satirical line which coined the ‘dormitive fallacy’:
(Le Malade Imaginaire (1673), Act III, sc. iii)
-Hippocrates
Considering the beast that some hope to kill by sharpening people’s mind-sticks on LW, this sounds applicable, wouldn’t you agree?
Upvote for “mind-sticks”.
Agreed. Best analogy ever.
Why is a quote by a Greek, about whom our main sources are also Greek, being posted in Latin?
The saying “Ars longa, vita brevis” is well known in my language in its Latin form. It seems to be the most common rendering in English as well.
Quidquid Latine dictum sit altum videtur.
(At the risk of ruining the joke: “Anything said in Latin sounds profound”)
Here’s the ancient Greek version, to appease NihilCredo:
No puns, upvoted.
-- Aleister Crowley
I recently contemplated learning to play chess better (not to make an attempt at mastery, but to improve enough so I wasn’t so embarrassed about how bad I was).
Most of my motivation for this was an odd signalling mechanism: People think of me as a smart person, and they think of smart people as people who are good at chess, and they are thus disappointed with me when it turns out I am not.
But in the process of learning, I realized something else: I dislike chess, as compared to, say, Magic: The Gathering, because chess is PURE strategy, whereas Magic or StarCraft have splashy images and/or luck that provide periodic dopamine rushes. Chess is only mentally rewarding for me at two moments: when I capture an enemy piece, or when I win. I’m not good enough to win against anyone who plays chess remotely seriously, so when I get frustrated, I just go capturing enemy pieces even though it’s bad play, so I can at least feel good about knocking over an enemy bishop.
What I found most significant, though, was the realization that this fundamental lack of enjoyment in thinking out chess strategies gave me some level of empathy for people who, in general, don’t like to think. (This is most non-nerds, as far as I can tell.) Thinking about chess is physically stressful for me, whereas thinking about other kinds of abstract problems is fun and rewarding purely for its own sake.
My issue with chess is that the skills are non-transferable. As far as I can tell, the main difference between good and bad players is memorisation of moves and strategies, which I don’t find very interesting and which can’t be transferred to other, more important areas of life. Whereas other games, where tactics and reacting to the situation are more important, can have benefits in other areas.
I think the literature disagrees. E.g. good players are less prone to confirmation bias and I think that this is transferable. (Google Scholar would know better.) Introspectively I feel like playing chess makes me a better thinker. Chess is memorization of moves and strategies only in the sense that guitar is memorization of scales and chords. You need them to play well but they’re not sufficient.
True; see Cowley & Byrne 2004, “Chess Masters’ Hypothesis Testing”:
Well… The chess literature and general literature on learning rarely finds transfer. From the Nature coverage of that study:
Checking Google Scholar, I see only one apparent followup, the 2005 paper by the same authors, “When falsification is the only path to truth”:
While interesting and very relevant to some things (like programmers’ practice of ‘rubber ducking’ - explaining their problem to an imaginary creature), it doesn’t directly address chess transfer.
LW has put a lot of thought into the problem of akrasia, but nothing I can think of on how to induce more pleasure from thinking.
I think rationality helps to avoid making mistakes and to avoid feeling unnecessarily bad, but not so much on the positive side of things.
I agree—pleasure in thinking might not be part of the study of rationality, but it could very much be part of raising the sanity waterline.
Wow—I have a similar response to chess, but never drew that analogy. Thanks.
Learn to play Go, then even if your chess ability is lower, people won’t be able to judge your Go ability.
Go is roughly a game based on encircling the other’s army before his or her army encircles yours. A bit of thought about the meaning of the word “encircle” should hint at how awesome that can be.
If your gaming heart has been more oriented towards WWII operational and strategic-level games, Go is the game for you. If chess incorporates the essence of WWI, Go incorporates the essence of mobile warfare in WWII, with the part of the essence represented by Poker removed.
Go = (an abstraction of mobile warfare) - Poker
Chess is battle, Go is war. I don’t see how it’s very much about mobility rather than scale.
What real scale and era, if any, is even roughly modeled?
Scott Boorman in The Protracted Game tried to model Mao with Go, and in particular, the anti-Japanese campaign in Manchuria. It was an interesting book. I’m not convinced that Go is a real analogy beyond beginner-level tactics, but he did convince me that Go modeled insurgencies much better than, say, Chess.
Chess: the Battle of Chi Bi is exemplary. (I am not sure if that is at all informative to people who don’t already know a ridiculous amount about Three Kingdoms-era China.) I don’t feel qualified to say anything about Go.
Why did you choose that battle? Subterfuge was prominent in it.
Chess may resemble some other pitched battles from before the twentieth century, but it doesn’t resemble modern war at all.
By subterfuge do you mean Huang Gai’s fire ships? I think of it more as a subtle pawn sacrifice which gets greedily accepted which allows for the invasion of Zhou Yu’s forces which starts a king hunt that forces Cao Cao to give up lots of material in the form of ships and would have resulted in his getting mated if he hadn’t a land to retreat to (and if he hadn’t gotten kinda lucky). I thought I remembered Pang Tong doing something interesting and symbolic somewhere in there (a counterattack on the opposite wing to draw away some of Cao Cao’s defending pieces) but I don’t remember if that was fictional or not.
This is an awesome quote that captures an important truth, the opposite of which is also an important truth :-) If I were choosing a vocation by the way its practitioners look and dress, I would never take up math or programming! And given how many people on LW are non-neurotypical, I probably wouldn’t join LW either. The desire to look cool is a legitimate desire that can help you a lot in life, so by all means go join clubs whose members look cool so it rubs off on you, but also don’t neglect clubs that can help you in other ways.
--Friedrich Nietzsche, The Birth of Tragedy (1872); cf. “Intellectual Hipsters and Meta-Contrarianism”
-- Oliver Cromwell
This has been mentioned in a few places on LW before (e.g. here) although I don’t know if it has been in a quotes thread.
Cromwell’s rule is neatly tied to that phrase.
(Rephrasing: “For the love of Cthulhu, take a second to notice that you might be confused.”)
-- Sergey Dovlatov
(translation is mine; can you propose a better translation from Russian?)
-- Dr. Dre, “The Watcher”
-- Scott Aaronson, Quantum Computing Since Democritus (http://www.scottaaronson.com/democritus/lec14.html)
It’s even more useful to you when they turn out to be right. (As happened to me with sailing upwind faster than the wind, and with Peter deBlanc’s 2007 theorem about unbounded utility functions.)
Reversed Stupidity?
In writing, I often notice that it’s easier to let someone else come up with a bad draft and then improve it—even if “improving” means “rewrite entirely”. Seeing a bad draft provides a basic starting point for your thoughts—“what’s wrong here, and how could it be done better?” Contrast this to the feeling of “there’s an infinite number of ways I could try to communicate this; which one of them should be promoted to attention?” that a blank page easily causes if you don’t already have a starting point in mind.
You could explain the phenomenon either as a constraining of the search space to a more tractable one, or via one of the ev-psych theories saying we have specialized modules for finding flaws in the arguments of others. Or both.
Over in the other thread, Morendil mentioned that a lot of folks who have difficulty with math problems don’t have any good model of what to do and end up essentially just trying stuff out at random. I wonder if such folks could be helped by presenting them with an incorrect attempt to answer a problem, and then asking them to figure out what’s wrong with it.
Here are a few excellent examples of what you just explained, as per the Fiction Identity Postulate:
* Doom: Consequences of Evil as the “bad draft”, and this as the done-right version.
* Same for this infuriating Chick Tract and this revisiting of it (it’s a Tear Jerker).
* And everyone is familiar with the original My Little Pony works vs. the Friendship Is Magic continuity.
I don’t think so. It seems that in this context Scott is talking about making his mathematical intuitions more precise by trying to state explicitly what is wrong with an idea. He seems to generally be doing this in response to comments by other people sort of in his field (comp sci) or connected to his field (physics and math), so he isn’t really trying to reverse stupidity.
People come up with ideas that are clearly and manifestly wrong when they’re confused about the reality. In some cases, this is just personal ignorance, and if you ask the right people they will be able to give you a solid, complete explanation that isn’t confused at all (evolution being a highly available example.)
On the other hand, they may be confused because nobody’s map reflects that part of the territory clearly enough to set them straight, so their confusion points out a place where we have more to learn.
It points to where the ripe bananas are, huh? Thanks, that was clarifying.
Seems more like harnessing motivated cognition, so long as opposite arguments aren’t privileged as counterarguments.
Reversed stupidity isn’t intelligence, but it’s not a bad place to start.
It is a bad place to start. The intended sense of “reversed” in “reversed stupidity” is that you pick the opposite, as opposed to retracting the decisions that led to privileging the stupid choice. The opposite of what is stupid is as arbitrary as the stupid thing itself, if you have considerably more than two options.
Vladimir is talking about reversed stupidity in the LW sense; but I don’t think it applies to cwillu’s quote. Asserting that a false statement is false is not “reversed stupidity”.
Not so; I can get very inventive trying to counter what I perceive as wrong or offensive. From disproving sources to offering countering and contradictory postulations: all are better when flung back. One of my great joys is when my snotty, off-hand comment makes someone go after real data to prove me wrong. If this is applied to some theoretical position, who knows where it could lead you. I’m pretty sure there is at least one Edison joke about this.
-Sam Harris
What about, I dunno, the protestant reformation, where people were persecuted for wanting, among other things, to read the bible themselves rather than have it interpreted for them by the priesthood?
What does it mean for a society to suffer?
-- Henry David Thoreau
(Though if a thousand people tried striking at the root at once they’d undoubtedly end up striking each other. (I wish there were something I could read that non-syncretically worked out analogies between algorithmic information theory and game-theory/microeconomics.))
That sounds awfully negative, and I can’t see any basis for it apart from negativity. I.e.: on what basis do you declare that people striking at the root are any more likely to strike each other than people striking at the branches?
While you might use the analogy to declare that the root of the problem is smaller, please note that there are trees (like giant sequoias) which have root systems that far outdistance the branch width.
If you picture the metaphorical great oak of malignancy with branches tens of yards in radius, and a trunk with roots (at the top of the trunk) only about 10 feet in diameter, you face one of those square-of-the-distance problems in terms of axe-swinging space.
This is what happens when you take the comments of romantic goofballs and slam them up against ontological rationalists who just might be borderline aspies or shadow autists.
I guess I should point out for the sake of clarity that the romantic goofball has not yet posted on this thread, and given the advanced interaction with entropy is unlikely to do so. Unless the Hindus, Buddhists and a few others are more accurate than the Catholics and Atheists.
-H. L. Mencken
From an evolutionary perspective, I would have to disagree. Believing that one’s children are supremely cute; that one’s spouse is one’s soulmate; or even that an Almighty Being wants you to be fruitful and multiply—these are all beliefs which are a bit shaky on rationalist grounds but which arguably increase the reproductive fitness in the individuals and groups who hold them.
ERROR: POSTULATION OF GROUP SELECTION DETECTED
Barely, as an afterthought.
If you want to worry about hints of superstition, look to the anthropomorphizing of TDT that is starting to crop up. This one was really scraping the bottom of the barrel as far as dire yet predictable errors go.
I understand why group selection is problematic: Individual selection trumps it.
However, when group and individual selective pressures coincide, the mutation could survive to the point where it exists in a group, at which point the group will have better fitness because of the group selective pressure.
Is this incorrect?
Don’t reverse stupidity too much: http://necsi.edu/research/evoeco/spatialpatterns.html (actual quantitative papers can be found by those who are interested; NECSI has some pretty cool stuff).
What is new here? It reads like the same old, wrong, group selection argument.
Huh? But, like, spatial patterns and shit. Okay, I’ll find something prestigious or something. Here’s a nice short position piece: http://www.necsi.edu/research/evoeco/nature08809_proof1.pdf Bam, Greek symbols and Nature, can’t argue with that.
ETA: Here’s a lot of fancy words and mathy shit: http://www.necsi.edu/research/multiscale/ . I don’t know how to read it but I do know that it agrees with my preconceptions, and whenever my intuition and Greek symbols align I know I’m right. It’s like astrology but better.
ETA2: Delicious pretty graphs and more Greek shit: http://www.necsi.edu/research/multiscale/PhysRevE_70_066115.pdf . Nothing to do with evolution but it’s so impressive looking that it doesn’t matter, right?
Whatever you’re trying to say, you aren’t helping it by your presentation. I mean:
Ordinarily that would be a rhetorical way of saying that you can and do argue with it (as do the authors of the paper that that was a response to), but you seem to be citing it in support of your previous comment. So, what is your actual point?
He knows, he’s Bruceing with his presentation.
Eh, sorta. (Voted up.) But I think the psychology is somewhat different. It’s like, “I’m going to be explicit about what signalling games I am participating in so that when you have contempt for me when I explicitly engage in them I get to feel self-righteous for a few seconds because I know that you are being hypocritical”. On the virtuous side, making things explicit is important for good accounting. Ideally I’d like to make it easier for me to be damned when I am truly unjustified. (I just wish there were wiser judges, better institutions than the ones I currently have access to.)
This comment exemplifies itself.
I see what you did there.
ETA: you didn’t need to edit to add “This comment exemplifies itself.”
Wow, it’s been a long time since someone chided me for pointing out the obvious! Heh. Point taken. (Sorry about editing after the fact, this almost never causes problems and is pretty useful but it does blow up once every 100 comments or so.)
I wasn’t chiding, only trying to prevent my comment from looking stupid.
After your edits: Do you have a problem with my question? It was clear and straightforward- I wanted to know what was new in the paper you linked. I was not trying to start some kind of status battle with you. I was not signaling anything. You indicated you had reason to believe previous findings on group selection were wrong- I asked you to explain the argument and you responded with what looks like rudeness and sarcasm. I don’t know if you were intending to direct that rudeness and sarcasm at me or if you’re just on a 48 hour Adderall binge. Either way, I suggest you take a nap.
It wasn’t directed at you at all; my sincere apologies for not making that clear. I don’t have a problem with your question. It was more like “ahhhh, despair, it would take me at least two minutes to think about how to paraphrase the relevant arguments, but I don’t have energy to do that, but I do want to somehow signal that it’s not just tired old group selection arguments because I don’t want NECSI to have been done injustice by my unwillingness to explain their ideas, but if I do that kind of signalling then I’m participating in a game that is plausibly in the reference class of propping up decision policies that are suboptimal, so I’ll just do it in a really weird way that is really discreditable so that I can get out of this double bind while still being able to say in retrospect that on some twisted level I at least tried to do the right thing.” ETA: Well, the double negative version of that which involves lots of fear of bad things, not desire for good things. I am not virtuous and have nothing to be humble about.
This is what Eliezer’s talking about in HP:MoR with:
I wish Dumbledore were made a steel man so he could give good counterarguments here rather than letting Harry win outright.
No need to dig up more sources; I just don’t know what “spatial patterns and shit” means.
I’m not sure I understand your point. By way of example, do you agree that generally speaking, ultra-Orthodox Jews believe that it’s a good idea to have a lot of children and to pass this idea to their children?
And do you agree that the numbers of ultra-Orthodox Jews have increased dramatically over the last 100 years and are likely to continue increasing dramatically?
His complaint is from here:
Group selection doesn’t work. If you were to delete those two words, it would be fine, but if you start talking about increasing the reproductive fitness of a group as a whole, evolutionary biologists and other scientists will tend to dismiss what you say.
Well what exactly is “group selection”? If a group of people has a particular belief; and as a result of that belief, the group increases dramatically in numbers, would it qualify as “group selection”?
Conversely, if a group of people has a particular belief; and as a result of that belief, the group decreases dramatically in numbers, would it qualify as “group selection”?
It would not qualify. The ultra-Orthodox Jews example you give is of a set of individuals each pursuing their own fitness, and the set does well because each individual in the set does well. Group selection specifically refers to practices which make the group better off at individual cost. For example, if you had more daughters than sons, your group could grow faster, but any person in the group who defects and has more sons than daughters will reap massive benefits from doing so.
The moral of the story is, some people are oversensitive to “group” in the same sentence as “reproductive fitness.” Try to avoid it.
Well in that case, I was not talking about group selection. I was referring to a set of individuals each of whose reproductive fitness would be enhanced by the beliefs shared by him and the other members of the set of individuals.
I think that in normal discussions, it’s reasonable to refer to a set of individuals with shared beliefs as a “group.” And if those beliefs generally enhance the reproduction of the individuals in that group, it’s reasonable to state that the reproductive fitness in the group has been enhanced.
I suppose, but I think it was pretty clear from the context what I meant when I said that certain beliefs “arguably increase the reproductive fitness in the individuals and groups who hold them.” At a minimum, I think I deserve the benefit of the doubt.
I agree with you, as implied by my choice of “oversensitive” rather than “sensitive.”
Thanks, and for what it’s worth I do agree that group selection as you have defined it is vulnerable to defection by individuals.
Could you remove the “quoted text” part?
— Horatio__Caine on reddit
You could say that… puts on sunglasses … his competence killed him.
Cue music. yeahhh
Matthew (slightly paraphrased...)
What does this mean?
If you have good judgement about what things imply, you’ll be good at gathering evidence.
If you have poor judgement about what things imply, you’ll lose track of the meaning of the evidence you’ve got.
Let me see if I’ve cottoned on by coming up with an example.
Say you work with someone for years, and often on Mondays they come in late & with a headache. Other days, their hands are shaking, or they say socially inappropriate things in meetings.
“Good inductive bias” appears to mean you update in the correct direction (alcoholism/drug addiction) on each of these separate occasions, whereas “bad inductive bias” means you shrug each occurrence off and then get presented with each new occurrence, as it were, de novo. So this could be glossed as basically “update incrementally.” Have I got the gist?
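(A toy version of the incremental picture, with made-up numbers: each occurrence is assumed twice as likely under the drinking-problem hypothesis than under its negation.)

```r
# Start from a 5% prior and apply one likelihood-ratio-2 update per occurrence.
prior <- 0.05
odds  <- prior / (1 - prior)
for (occasion in 1:5) {
  odds <- odds * 2     # multiply odds by the likelihood ratio each time
  cat(sprintf("after occasion %d: P(H) = %.2f\n", occasion, odds / (1 + odds)))
}
```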
I think what’s mildly confusing is the normatively positive use of the word “bias,” which typically suggests deviation from ideal reasoning. But I suppose it is a bias in the sense that one could go too far and update on every little piece of random noise...
“Inductive bias” is a technical term, where the word bias isn’t meant negatively.
I think that’s it, though there are at least two sorts of bad bias. The one you describe (nothing is important enough to notice or remember) is one, but there’s also having a bad theory (“that annoying person is aiming it all at me”, for example, which would lead to not noticing evidence of things going wrong which have nothing to do with malice).
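The “update incrementally” reading can be made concrete with a toy Bayesian calculation; in the sketch below (my own addition), the prior and the likelihood ratios are invented for illustration:

```python
# A toy illustration of incremental updating: each weak observation
# shifts the log-odds a little, and the evidence only adds up if you
# keep track of it.

import math

def update_log_odds(prior_prob, likelihood_ratios):
    """Accumulate evidence for a hypothesis in log-odds form."""
    log_odds = math.log(prior_prob / (1 - prior_prob))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)  # Bayes' rule, one observation at a time
    return 1 / (1 + math.exp(-log_odds))  # back to a probability

# Hypothetical likelihood ratios: each event is, say, 3x as likely
# given a drinking problem as without one.
observations = [3.0, 3.0, 3.0, 3.0]  # Monday headaches, shaking hands...
print(update_log_odds(0.05, observations))   # prior 5% -> ~81%

# Shrugging each event off "de novo" is like updating on only the
# latest one and forgetting the rest:
print(update_log_odds(0.05, observations[-1:]))  # stays ~14%
```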
This is reminding me of one of my favorite bits from Illuminatus!. There’s a man with filing cabinets [1] full of information about the first Kennedy assassination. He’s convinced that someday, he’ll find the one fact which will make it all make sense. He doesn’t realize that half of what he’s got is lies people made up to cover their asses.
In the novel, there were five conspiracies to kill JFK—but that character isn’t going to find out about them.
[1] The story was written before the internet.
It’s been far too long since I’ve heard this underlying point acknowledged! Thank you!
-- Bertrand Russell, The Philosophy of Logical Atomism
~ William Johnson Cory
Georg Christoph Lichtenberg, via The Lichtenberg Reader: selected writings, trans. and ed. Franz H. Mautner and Henry Hatfield.
Sigmund Freud, The Future of an Illusion, part VI
Captain Tagon: Lt. Commander Shodan, years ago when you enlisted you asked for a job as a martial arts trainer.
Captain Tagon: And here you are, trying to solve our current problem with martial arts training.
Captain Tagon: How’s that saying go? “When you’re armed with a hammer, all your enemies become nails?”
Shodan: Sir,.. you’re right. I’m being narrow-minded.
Captain Tagon: No, no. Please continue. I bet martial arts training is a really, really useful hammer.
http://www.schlockmercenary.com/2010-03-08
“What I cannot create, I do not understand.”
-Richard Feynman
Taken from Wikiquote, which took it from Stephen Hawking’s book The Universe in a Nutshell, which took it from Feynman’s blackboard at the time of his death (1988).
It’s simple, but it gets right at the heart of why the mountains of philosophy are the foothills of AI (as Eliezer put it).
Francis Bacon, The Advancement of Learning and New Atlantis
Julien Offray de La Mettrie, Man a Machine, 1748
The Big Chill
Yitz Herstein
Banach, in a 1957 letter to Ulam.
-- Nick Tarleton
The original goes:
-- T. S. Eliot
Local optima of what function?
The Onion (it’s sort of a rationality and anti-rationality quote at multiple levels)
Richard P. Feynman
-- C.S. Peirce
Locke
I disagree. A lot of the human conduct that I find virtuous, such as compassion or tolerance, has no immediate connection with the truth, and is sometimes best served by white lies.
For example, all the LGBTQ propaganda spoken at doubting conservatives, about how people are either born gay or they aren’t, and how modern culture totally doesn’t make young people bisexual, no sir. We’re quite innocent, human sexuality is set in stone, you see. Do you really wish to hurt your child for what they always were? What is this “queer agenda” you’re speaking about?
Tee-hee :D
Um, this is both a strawman of what LGBTQ activists say and appears to seriously overestimate the degree to which a person has control over their sexual orientation.
I don’t think control as such is the issue, though; at least, that’s not how I read Multiheaded’s comment. It seems at least plausible that human sexuality is at least somewhat malleable to cultural inputs: even if no one consciously and explicitly says, “I hereby choose to be gay,” it could very well be that a gay-friendly culture results in more people developing non-straight orientations.
If nothing else, there are incentive effects: even if sexual orientation is fixed from birth, people’s behavior is regulated by cultural norms. Thus, we should expect that greater tolerance of homosexuality will lead to more homosexual behavior, as gays and people who are only marginally non-straight feel more free to act on their desires. For example, an innately bisexual person might engage entirely in heterosexual behavior in a society where homosexuality was heavily stigmatized, but engage in more homosexual behavior once the stigma is lifted.
Thus, conservatives who fear that greater tolerance of homosexuality will lead to more homosexual behavior are probably correct on this one strictly factual point, although I would expect the magnitude of the effect to be rather modest.
I don’t disagree with any of this. Most LGBTQ activists wouldn’t either. I used the hedging language “appears” because I don’t know for sure what kind of agency Multiheaded thinks people have over their sexuality.
Yeah, I meant something like that.
You may want to carefully consider this comment.
I can’t tell if you’re joking...
Dead serious, actually. Well, what I mean is that a heteronormative approach where everyone must be either a 0 or a 6 on the Kinsey scale is hard to maintain in the modern world, and that when some extremely irrational older folks hate to see how young people can, for the first time in history, 1) discover their sexuality with some precision by using media and freely experimenting and 2) get a lot of happiness that way, it’s fine to spin a clean and simple tale of the subject matter to those sorry individuals.
… I like the way you talk. This goes a long way toward explaining the same person saying “homosexuality is not a choice” and “I have been with quite a few straight guys”, as well as the treatment bi people get as “fence-sitters” and the resentment they generate by having an easier time in the closet.
I’m profoundly disappointed that this has been upvoted.
Could you elaborate on what you found objectionable?
Link.
-Seth Klarman, Margin of Safety, p.90
Friedrich Nietzsche
-- Mary Everest Boole
-- Robert Nozick (The Nature of Rationality)
Will Newsome on facebook ;)
-- HL Mencken
This is quoted already on this page, albeit with “no matter” substituted for “however”.
I disagree, especially with the second part. For a trivial example, take the traditional refutation of Kantianism: You are hiding Jews in your house during WWII. A Nazi shows up and asks if you are hiding any Jews.
I’m going to have to call you on this one: in your trivial example, you intend harm to, disruption of, and diversion of the Nazi’s plan. Causing disruption to another is vicious, even if you are being virtuous in your choice to disrupt.
Causing disruption is certainly vicious in the sense of aggressive or violent, yes. I, and apparently Normal_Anomaly, read the quote from Mencken as meaning that lying is vicious in the sense of immoral, ‘vice-ious’, and hence unjustifiable.
No, it is not.
vicious [vish-uhs]:
addicted to or characterized by vice; grossly immoral; depraved; profligate: a vicious life.
given or readily disposed to evil: a vicious criminal.
reprehensible; blameworthy; wrong: a vicious deception.
spiteful; malicious: vicious gossip; a vicious attack.
unpleasantly severe: a vicious headache.
Frank Schaeffer
Beware the fallacy of grey.
-- Richard P. Feynman
An oldie but a goodie.
.
Quite literally, in fact.
.
“Our present study is not, like other studies, purely theoretical in intention; for the object of our inquiry is not to know what virtue is but how to become good, and that is the sole benefit of it.” —Aristotle’s Nicomachean Ethics (translated by James E. C. Weldon; emphasis added)
-Anon.
-- Planet Sheen
“When anyone asks me how I can describe my experience of nearly forty years at sea, I merely say uneventful. Of course there have been winter gales and storms and fog and the like, but in all my experience, I have never been in an accident of any sort worth speaking about. I have seen but one vessel in distress in all my years at sea… I never saw a wreck and have never been wrecked, nor was I ever in any predicament that threatened to end in disaster of any sort.”
E.J. Smith, 1907, later captain of the RMS Titanic
Note: This is one of those comments that has been repeated, without citation, on the internet so many times that I can no longer find a citation.
I will submit (separately) three quotations from my favorite philosopher, C.S. Peirce:
-- C.S. Peirce
Crap … I appear to have screwed up something in the markdown syntax …
-- Jim Dator (“Dator’s Law”)
Strongly disagree with this quote. Some useful ideas about the future might seem ridiculous, but a lot won’t. Lots of new technologies and improvements are due to the steady, fairly predictable improvement of existing technologies. It might be true that many of the most useful ideas have a high chance of appearing ridiculous, but even that only means we’re poorly calibrated about what is and is not reasonably doable. There’s also a secondary issue: many if not most of the ideas which seem ridiculous turn out to be about as ridiculous as they seemed, if not more so (e.g. nuclear-powered aircraft, which might be doable but will remain ridiculous for the foreseeable future), and even plausible-seeming technologies often turn out not to work (such as the flying car). Paleo Future is a really neat website which catalogs predictions about the future, especially in the form of technologies that never quite made it or failed miserably. The number of ideas which failed is striking.
If there is a useful idea about the future which triggers no ridiculous or improbable filters, doesn’t that imply many people will have already accepted that idea, using it and removing the profit from knowing it? To make money, you need an edge; being able to find ignored gems in the ‘possible ridiculous futures’ sounds like a good strategy.
Not necessarily. For example, it could be that no one had thought of the idea in question, but once someone thought of it, its usefulness was immediately obvious.
Sure, but that implies a rather inefficient market—not even exploring major possibilities! Wouldn’t work on Wall Street, I don’t think.
An idea can still be useful even if everyone else knows about it too. Life isn’t a zero-sum game.
Like this one?
“Although nature commences with reason and ends in experience it is necessary for us to do the opposite, that is to commence with experience and from this to proceed to investigate the reason.”
-Leonardo da Vinci
“Communication usually fails, except by accident”—Osmo Wiio
“Communication” here has a different definition from the usual one. I interpreted it as referring to the richness of your internal experiences and the intricate web of associations conjured in your mind when you say even a single word.
-- Raymond Terrific
-- Cliff Pervocracy
Anthony Quinn Stanley
I think I prefer Nietzsche’s version…
-- C.S. Peirce
-- Upton Sinclair
Dupe
Forgot to google it. Sorry.
-- Jeffrey Lewis, If Life Exists, which is really about set point happiness
Henri L. Bergson—The Creative Mind: An Introduction to Metaphysics, p. 218
ETA: retracted. I posted this on the basis of my interpretation of the first sentence, but the rest of the quote makes clear that my interpretation of the first sentence was incorrect, and I don’t believe it belongs in a rationality quotes page anymore.
What?
Quite. Bergson might not reach the same level of awfulness as the examples David Stove pillories, but I couldn’t penetrate the fog of this paragraph, not even with the context. I think Wikipedia nails the jelly to the wall, though: Bergson argued that “immediate experience and intuition are more significant than rationalism and science for understanding reality”. In which case, −1 to Bergson. I learn from the article that Bergson also coined the expression élan vital.
James Clerk Maxwell
I am having difficulty parsing this. The easiest interpretation to make of the first part seems to be “There are no laws of matter except the ones we make up,” and the second part is saying either “minds are subject to physics” or something I don’t follow at all.
I interpret the first part as saying that there are no laws of matter other than ones our minds are forced to posit (forced over many generations of constantly improving our models). And the second part is something like “minds are subject [only] to physics”, as you said. The second part explains how and why the first part works.
Together, I interpret them as suggesting a reductive physicalist interpretation of mind (in the 19th century!) according to which our law-making is not only about the universe but is itself the universe (or a small piece thereof) operating according to those same laws (or other, deeper laws we have yet to discover).
-- Jean-Paul Sartre, Nausea
-Whitbread’s Fyunch(click), from “The Mote in God’s Eye” by Larry Niven & Jerry Pournelle.
This doesn’t really address the possibility that Whitbread’s Fyunch(click) may have incomplete evidence or facts, biases, or aims of his own.
For me it runs more along the lines of Aumann’s agreement theorem.
-Solid Snake, Metal Gear Solid 2: Sons of Liberty
In other words, have no heroes, and no villains.
-- Richard Dawkins, The Selfish Gene
(I know it’s old and famous and classic, but this doesn’t make it any less precious, does it?)
Sometimes I suspect that wouldn’t even occur to them as a question. That evolution might turn out to be one of those things that it’s just assumed any race that had mastered agriculture MUST understand.
Because, well, how could a race use selective breeding, and NOT realise that evolution by natural selection occurs?
Easily.
Realizing far-reaching consequences of an idea is only easy in hindsight, otherwise I think it’s a matter of exceptional intelligence and/or luck. There’s an enormous difference between, on the one hand, noticing some limited selection and utilising it for practical benefits—despite only having a limited, if any, understanding of what you’re doing—and on the other hand realizing how life evolved into complexity from its simple beginnings, in the course of a difficult to grasp period of time. Especially if the idea has to go up against well-entrenched, hostile memes.
I don’t know if this has a name, but there seems to exist a trope where (speaking broadly) superior beings are unable to understand the thinking and errors of less advanced beings. I first noticed it when reading H. Fast’s The First Men, where this exchange between a “Man Plus” child and a normal human occurs:
“Can you do something you disapprove of?” “I am afraid I can. And do.” “I don’t understand. Then why do you do it?”
It’s supposed to be about how the child is so advanced and undivided in her thinking, but to me it just means “well then you don’t understand how the human mind works”.
In short, I find this trope to be a fallacy. I’d expect an advanced civilisation to have a greater, not lesser, understanding of how intelligence works, its limitations, and failure modes in general.
Yeah. This was put very well by Fyodor Urnov, in an MCB140 lecture:
“What is blindingly obvious to us was not obvious to geniuses of ages past.”
I think the lecture series is available on iTunes.
But what reason do we have to expect them to pick evolution, as opposed to the concept of money, or of extensive governments (governments governing more than 10,000 people at once), or of written language, or of the internet, or of radio communication, or of fillangerisation, as their obvious sign of advancement?
Just because humans picked up on evolution far later than we should have, doesn’t mean that evolution is what they’ll expect to be the late discovery. They might equally expect that the internet wouldn’t be invented until the equivalent tech level of 2150. Or they might consider moveable type to be the symbol of a masterful race.
Just because they’ll likely be able to understand why we were late to it, doesn’t mean it would occur to them before looking at us. It’s easy to explain why we came to it when we did, once you know that that’s what happened; but if you were from a society that realised evolution [not necessarily common descent] existed as it was domesticating animals, would you really think of understanding evolution as a sign of advancement?
EDIT: IOW: I’ve upvoted your disagreement with the “advanced people can’t understand the simpler ways” trope; but I stand by my original point: they wouldn’t EXPECT evolution to be undiscovered.
I suspect that the intent of the original quote is that they’ll assess us by our curiosity towards, and effectiveness in discovering, our origins. As Dawkins is a biologist, he is implying that evolution by natural selection is an important part of it, which of course is true. An astronomer or cosmologist might consider a theory on the origins of the universe itself to be more important, a biochemist might consider abiogenesis to be the key, and so on.
Personally, I can see where he’s coming from, though I can’t say I feel like I know enough about the evolution of intelligence to come up with a valid argument as to whether an alien species would consider this to be a good metric to evaluate us with. One could argue that interest in oneself is an important aspect of intelligence, and scientific enquiry important to the development of space travel, and so a species capable of travelling to us would have those qualities and look for them in the creatures they found.
This is my first time posting here, so I’m probably not quite up to the standards of the rest of you just yet. Sorry if I said something stupid.
Welcome to lesswrong.
I wouldn’t consider anything you’ve said here stupid, in fact I would agree with it.
I, personally, see it as a failure of imagination on Dawkins’ part that he considers the issue he personally finds most important to be the one alien intelligences will find most important, but you are right to point out his likely reasoning.
I think you’re interpreting the quote too literally; it’s not a statement about some actual alien intelligences but an allegory to communicate just how important the science of evolution is.
Another chain of reasoning I have seen people use to reach similar conclusions is that the aliens are looking for species that have outgrown their sense of their own special importance to the universe. Aliens checking for that would be likely to ask about evolution, or possibly about cosmologies that don’t have the home planet at the center of the universe. However, I don’t think a sense of specialness is one of the main things aliens will care about.
Have you never looked at something someone does and asked yourself, “How can they be so stupid?”
It’s not as though you literally cannot conceive of such limitations; just that you cannot empathize with them.
It’s anthropomorphism to assume that it would occur to advanced aliens to try to understand us empathetically rather than causally/technically in the first place, though.
Anthropomorphism? I think not. All known organisms that think have emotions. Advanced animals demonstrate empathy.
Now, certainly it might be possible that an advanced civilization might arise that is non-sentient, and thus incapable of modeling others’ psyches empathetically. I will admit to the possibility of anthropocentrism in my statements here; that is, in my inability to conceive of a mechanism whereby technological intelligence could arise without passing through a route that produces intelligences sufficiently like our own as to possess the characteristic of ‘empathy’.
It’s one thing to postulate counter-factuals; it’s another altogether to actually attempt to legitimize them with sound reasoning.
Do you have any good evidence that this assertion applies to Cephalopods? I.e., either that they don’t think or that they have emotions. (Not a rhetorical question; I know about them only enough to realize that I don’t know.)
Cephalopods in general have actually been shown to be rather intelligent. Some species of squid even engage in courtship rituals. There’s no good reason to assume that given the fact that they engage in courtship, predator/prey response, and have been shown to respond to simple irritants with aggressive responses that they do not experience at the very least the emotions of lust, fear, and anger.
(Note: I model “animal intelligence” in terms of emotional responses; while these can often be very sophisticated, they lack abstract reasoning. Many animals are intelligent beyond ‘simple’ animal intelligence, but those are the exception rather than the norm.)
I agree, but I’m not sure the examples you gave are good reasons to assume the opposite. They’re certainly evidence of intelligence, and there are even signs of something close to self-awareness (some species apparently can recognize themselves in mirrors).
But emotions are a rather different thing, and I’m rather more reluctant to assume them. (Particularly because I’m even less sure about the word than I am about “intelligence”. But it also just occurred to me that between people emotions seem much easier to fake than intelligence, which stated the other way around means we’re much worse at detecting them.)
Also, the reason I specifically asked about Cephalopods is that they’re pretty close to as far away from humans as they can be and still be animals; they’re so far away we can’t even find fossil evidence of the closest common ancestor. It still had a nervous system, but it was very simple as far as I can tell (flatworm-level), so I think it’s pretty safe to assume that any high level neuronal structures have evolved completely separately between us and cephalopods.
Which is why I’m reluctant to just assume things like emotions, which in my opinion are harder to prove.
On the other hand, this means any similarity we do find between the two kinds of nervous systems (including, if demonstrated, having emotions) would be pretty good evidence that the common feature is likely universal for any brain based on neurons. (Which can be interesting for things like uploading, artificial neuronal networks, and uplifting.)
While I think you’re right to point out that the uncomprehending-superior-beings trope is unrealistic, I don’t think Dawkins was generalizing from fictional evidence; his quote reads more to me like plain old anthropomorphism, along with a good slice of self-serving bias relating to the importance of his own work.
A point similar to your first one shows up occasionally in fiction too, incidentally; there’s a semi-common sci-fi trope that has alien species achieving interstellar travel or some other advanced technology by way of a very simple and obvious-in-retrospect process that just happened never to occur to any human scientist. So culture’s not completely blind to the idea. Both tropes basically exist to serve narrative purposes, though, and usually obviously polemic ones; Dawkins isn’t any kind of extra-rational superhuman, but I wouldn’t expect him to unwittingly parrot a device that transparent out of its original context.
The British agricultural revolution involved animal breeding starting in about 1750. Darwin didn’t publish Origin of Species until 1859, so in reality it took about 100 years for the other shoe to drop.
100 years is nothing in the evolution of a civilization though. The time between agricultural revolution and the discovery of evolution is not a typical period in the history of humanity.
Selective breeding had been around much longer than that.
Selective breeding isn’t necessarily the same as artificial selection, however. The taming of dogs and cats was largely considered accidental; the neotenous animals were more human-friendly and thus able to access greater amounts of food supplies from humans until eventually they could directly interact, whereupon (at least in dogs) “usefulness” became a valued trait.
There wasn’t purposefulness in this; people just fed the better dogs more and disliked the ‘worse’ dogs. It wasn’t until the mid-1700s that dog ‘breeds’ became a concept.
There were certainly attempts to breed specific traits earlier than that, but they were hindered by a poor understanding of inheritance. For example, in the Bible, Jacob tried to breed speckled cattle by putting speckled rods in front of the cattle when they were trying to mate. Problems with understanding how genetics works at a basic level were an issue even much later, and some of them still impact what are officially considered purebreds now.
I think that deliberate breeding of stronger horses dates back prior to the 1700s, at least to the early Middle Ages, but I don’t have a source for that.
Absolutely. Even the dog-breeding practitioners were unaware of how inheritance operates; that understanding didn’t come about until Gregor Mendel. We really do take for granted the vast body of understanding of the topic we are inculcated with simply through cultural osmosis.
If I were an intelligent creature from space visiting Earth, I’d probably start by asking, “do they have anything that can shoot us out of orbit?” That’s just me though.
I wouldn’t say it has much preciousness to begin with. It is nearly nonsensical cheering, the sort of thing I don’t like to associate myself with at all.
I would actually think evolution a particularly poor choice.
If you want to pick one question to ask (and if we leave aside the obvious criterion of easy detectability from space) then you would want to pick one strongly connected in the dependency graph. Heavier than air flight, digital computers, nuclear energy, the expansion of the universe, the genetic code, are all good candidates. You can’t discover those without discovering a lot of other things first.
But Aristotle could in principle have figured out evolution. The prior probability of doing so at that early stage may be small, but I’ll still bet evolution has a much larger variance in its discovery time than a lot of other things.
This is a good one. I like it.
Seems dependent on substitute energy availability and military technology.
There seems to be significant variance in how much humans care about such things, and achievement depends significantly on interest. Would aliens care at all about this?
I think we would do quite poorly with any one such question and exponentially better if permitted a handful.
Cringe. Please don’t use “exponentially” to mean “a lot” when you have only two data points.
I mean we’d do more than twice as well with two questions as with one, and more than twice as well with three as with two. Usually, diminishing returns leads us to learn less from each additional question, but not here. How do I express that?
I have zero data points, I’m comparing hypothetical situations in which I ask aliens one or more questions about their technology. (It seems Dawkins’ scenario got inverted somewhere along the way, but I don’t think that makes any difference.)
That’s actually a claim of superexponential growth, but how you said it sounds ok. I’m actually not sure that you can get superexponential growth in a meaningful sense. If you have n bits of data you can’t do better than having all n bits be completely independent. So if one is measuring information content in a Shannon sense one can’t do better than exponential.
Edit: If this is what you want to say I’d say something like “As the number of questions asked goes up the information level increases exponentially” or use “superexponentially” if you mean that.
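For what it’s worth, the Shannon-style bound invoked above can be written out; a sketch (mine, not from the thread), assuming for simplicity that each of the n questions gets a yes/no answer:

```latex
% Each of the n questions yields a binary answer X_i.
\[
  H(X_1, \dots, X_n) \;\le\; \sum_{i=1}^{n} H(X_i) \;\le\; n \text{ bits},
\]
% with equality only if the answers are independent fair coin flips.
% So n answers can distinguish at most
\[
  2^{H(X_1, \dots, X_n)} \;\le\; 2^n
\]
% hypotheses: what you can learn grows at most exponentially in the
% number of questions, never superexponentially.
```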
My best guess for each individual achievement gets better with each other achievement I learn about, as they are not independent.
I was trying to get at the legitimacy of summarizing the aggregate of somewhat correlated achievements as a “level of civilization”. Describing a civilization as having a “low/medium/high/etc. level of civilization” relative to others depends either on its technological advances being similarly correlated or on establishing some subset of them as especially important. I don’t think the latter can be done much, which leaves inquiring about the former.
If the aliens are sending interstellar ships to colonize nearby systems, have no biology or medicine, have no nuclear energy or chemical propulsion (they built a tower on their low gravity planet and launched a solar sail based craft from it with the equivalent of a slingshot for their space program), and have quantum computers, they don’t have a level of technology.
Well, what does “no medicine” mean? A lot of medicine would work fine without understanding genetics in detail; blood donors and antibiotics are both examples. Also, do normal computers not count as technology? Why not? Assume that we somehow interacted with an alien group that fit your description. Is there nothing we could learn from them? I think not. For one, they might have math that we don’t have. They might have other technologies that we lack (for example, better superconductors). You may be buying into a narrative of technological levels that isn’t necessarily justified. There are a lot of examples of technologies that arose fairly late compared to when they would already have made sense. For example, one-time pads arose in the late 19th century, but would have made sense as a useful system on telegraphs 20 or 30 years before. Similarly, high-temperature superconductors (that is, substances that are superconductors at liquid-nitrogen temperatures) were discovered in the mid-1980s, but the basic constructions could have been made twenty years before.
No blood donors (if they have blood), no antibiotics (if they have bacteria), etc.
Of course they do.
We could learn a lot from them, but it would be wrong to say “The aliens have a technological level less than ours”, “The aliens have a technological level roughly equal to ours”, “The aliens have a technological level greater than ours”, or “The aliens have a technological level, for by technological levels we can most helpfully and meaningfully divide possible-civilizationspace”.
My point is that there are a lot of examples of technologies that arose fairly late compared to when they would already have made sense, so asking about which technologies have arisen isn’t as informative as one might intuitively suspect. It’s so uninformative that the idea of levels of technology is in danger of losing coherence as a concept, absent confirmation from the alien society that we can analogize from our society to theirs, confirmation that requires multiple data points.
Ah, I see. Yes that makes sense. No substantial disagreement then.
I heard a Calculus teacher do this with even less justification a few days ago.
EDIT: was this downvoted for irrelevancy, or some other reason?
I didn’t downvote it, but if you notice, JoshuaZ concluded my use of “exponential” was “ok”, as what I actually meant was not “a lot” but rather what is technically known as “superexponential growth”.
“Even less justification” has some harsh connotations.
Very much agreed.
I also agree with:
I agree with the general idea of:
though I think it is hard to correctly choose according to this criterion. I’m skeptical that digital computers would really pass this test. Considering the medium that we are all using to discuss this, we might be a bit biased in our views of their significance. (as a former chemist, I’m biased towards picking the periodic table—but I know I’m not making a neutral assessment here.)
Nuclear energy seems like a decent choice, from the dependency-graph point of view. A civilization which is able to use either fission or fusion has to pass a couple of fairly stringent tests. To detect the relevant nuclear reactions in the first place, they need to detect MeV particles, which aren’t things that everyday chemical or biological processes produce. To get either reaction to happen on a large scale, they must recognize and successfully separate isotopes, which is a significant technical accomplishment.
Is it possible the right isotopes might be lying around? Like here, but more concentrated and dispersed?
Yes, good point, if intelligent life evolved faster on their planet. The relevant timing is how long it took after the supernova that generated the uranium for the alien civilization to arise. (since that sets the 238U/235U ratio).
I’m confused. I thought a reaction needed a quantity of 235U in an area, and that smaller areas needed more 235U to sustain a chain reaction. Wouldn’t very small pieces of relatively 235U rich uranium be fairly stable? One could then put them together with no technological requirements at all.
You are quite correct, small pieces of 235U are stable. The difference is that the low concentration of 235U in natural uranium (because of its faster decay relative to 238U) makes it harder to get to critical mass, even with chemically pure (but not isotopically pure) uranium. IIRC, reactor grade is around 5% 235U, while natural uranium is 0.7%. IIRC, pure natural uranium metal, at least by itself, doesn’t have enough 235U to sustain a chain reaction, even in a large mass (but I vaguely recall that the original reactor experiment, with just the right spacing of uranium metal lumps and graphite moderator, may have used natural uranium—I need to check this… (short of time right now)). (I’m still not quite sure—Chicago Pile-1 is documented here, but the web page describes the fuel as “uranium pellets”. I think they mean natural uranium, in which case I withdraw my statement that isotope separation is a prerequisite for nuclear power.)
I think this is correct but finding a source which says that seems to be tough. However, Wikipedia does explicitly confirm that the successor to CP1 did initially use unenriched uranium.
Edit: This article (pdf) seems to confirm it. They couldn’t even use pure uranium but had to use uranium oxide. No mention of any sort of enrichment is made.
Yes, CP-1 used natural uranium (~0.7% U-235) and ultra-high-purity graphite. That would become impossible to attain without isotope separation in just a few hundred million years more, on top of the billions since the formation of the uranium in a star. Conversely, 1.7 billion years ago it occurred naturally, with regular water to slow down the neutrons.
Fusion is more interesting.
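The decay arithmetic here is easy to check. A quick back-of-the-envelope sketch (mine, not from the thread), using the published half-lives and today’s natural abundance:

```python
# Back-of-the-envelope check of how the 235U fraction of natural
# uranium changes over geological time.

HALF_LIFE_U235 = 0.7038      # billion years
HALF_LIFE_U238 = 4.468       # billion years
ABUNDANCE_U235_NOW = 0.0072  # fraction of natural uranium today

def u235_fraction(gyr_ago):
    """235U fraction of natural uranium `gyr_ago` billion years ago.

    Running decay backwards: each isotope was more plentiful in the
    past by a factor of 2**(t / half_life).
    """
    n235 = ABUNDANCE_U235_NOW * 2 ** (gyr_ago / HALF_LIFE_U235)
    n238 = (1 - ABUNDANCE_U235_NOW) * 2 ** (gyr_ago / HALF_LIFE_U238)
    return n235 / (n235 + n238)

print(u235_fraction(0.0))   # ~0.72%: today, enough for CP-1 + graphite
print(u235_fraction(1.7))   # ~2.9%: the Oklo era, roughly reactor grade
print(u235_fraction(-0.5))  # ~0.48%: half a billion years from now
```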
What the natural isotope ratio is, is something that I (with no background other than a history-of-nuclear-weapons class for my history degree) am not confident wouldn’t vary from solar system to solar system.
The natural reactor ended up with less 235U than ordinarily decayed uranium because some of the fuel had been spent. I assume that it began with either an unusual concentration of regular uranium (or some other configuration of elements that slowed neutrons or otherwise facilitated a reaction), or that the uranium there was unusually rich in 235U. If it was the latter, I don’t know the limits on how rich in 235U uranium could be at the time of seeding into a planet, but no matter the richness, having small enough pieces would preserve it for future beings. Richness alone wouldn’t cause a natural reaction, so to the extent richness can vary, it can make nuclear technology easy.
If the natural reactor had average uranium, and uranium on planets wouldn’t be particularly more 235U rich than ours, then nuclear technology’s ease would be dependent on life arising quickly relative to ours, but not fantastically so, as you say.
The genetic code might well vary. While it isn’t implausible that other life would use DNA for its genetic storage, it doesn’t seem all that likely. It seems extremely unlikely that the DNA would be organized in the same triplet-codon system that life on Earth uses.
Heavier-than-air flight is also a function of what sort of planet you are on. If Earth had slightly weaker or stronger gravity, the difficulty of this achievement would change a lot. Also, if intelligent life had arisen from a winged species, one could see this impacting how much they study aerodynamics and the like. One could conceive of that going either way (say, having a very intuitive understanding of how to fly but considering it incredibly difficult to make an Artificial Flyer; or the opposite, using that intuition to easily understand what would need to be done).
Other than that, your argument seems to be a good one.
I wonder if there’s any way to estimate how hard it is for an intelligent species to think of evolution. It’s a very abstract theory, and I think it’s plausible that intelligent species could be significantly better or worse than we are at abstract thought. I have no idea where the middle of the bell curve (if it’s a bell curve at all) would be.
But it does, no?
Sorry, but I don’t understand you.
Using the outside view tells you something about your particular case. If you don’t know how long your project is going to take, and then someone tells you that such projects normally take ten months, then you’ve learned something about how long your project will take: it will take about ten months. Your quote is the kind of thing that people say because they think their project is special in some way; they’re trying to fight the outside view. But most of the time their project isn’t special, and it will take just as long as everyone else’s.
The outside view informs judgements of particular cases.
Alright, I will add a bit more context:
From “Timid Choices and Bold Forecasts: A Cognitive Perspective on Risk Taking” by Daniel Kahneman and Dan Lovallo
Ah, that makes much more sense. Thanks.
.
Sorry, I don’t understand what this quote is trying to say. I’ve attempted to parse it and can sort of get something about not caring what the truth is. If that’s the meaning, then it seems to be pretty anti-rationalist. What am I missing?
.
I found Campbell’s The Hero with a Thousand Faces not very convincing. The similarities he sees between folk stories are often rather trivial, I think, and the rubbery nature of human language makes it easy to find them—not even mentioning selection bias.
Is The Power of Myth better?
.
Greg Egan’s short story “The Planck Dive” has an interesting take on that subject. It’s about a mythologist trying to force a description of a post-Singularity scientific expedition into one of the classic mythical narratives.
It’s not “post-Singularity”, it’s normal human technology, just more advanced.
I guess you could say that. I said “post-Singularity” because all the characters are uploads, but there aren’t any AGIs and human nature isn’t unrecognizably different.
An example of a well-known non-trivial similarity would be the flood myths that many cultures have—it seems that at least some of those myths are related somehow—but not in the inherited, psychoanalytical way (!) that Campbell suspects; more likely it is simply due to copying of the stories (e.g. Noah, Gilgamesh).
“LANGUAGE IS MORE THAN BLOOD”
-- Franz Rosenzweig, quoted in the book “The Language of the Third Reich: A Philologist’s Notebook” by Holocaust survivor Victor Klemperer
Huh? Unless you are quoting from a fantasy story with an unusual magic system then I have no idea what you are talking about.
Then → than?
Language is more than blood… more powerful than blood? I recognise “The Language of the Third Reich”; it was a study of how language (most notably words like “alien” and “eternal”) was used to alter perceptions during the Third Reich’s reign. Maybe this quote means language can turn blood relatives against each other? Or that language can dehumanise a person to the point that seeing them die (their blood spilled?) doesn’t bother someone?
Yeah, I got nothing either.
It is poetry. Given the context, it is a sentence which stresses the importance of language, of reflecting on language and of using it properly. Language has grave consequences.