Rational Me or We?
Martial arts can be good training for ensuring your personal security, if you assume the worst about your tools and environment. If you expect to find yourself unarmed in a dark alley, or fighting hand to hand in a war, it makes sense. But most people do a lot better at ensuring their personal security by coordinating to live in peaceful societies and neighborhoods; they pay someone else to learn martial arts. Similarly, while “survivalists” plan and train to stay warm, dry, and fed given worst case assumptions about the world around them, most people achieve these goals by participating in a modern economy.
The martial arts metaphor for rationality training seems popular at this website, and most discussions here about how to believe the truth seem to assume an environmental worst case: how to figure out everything for yourself given fixed info and assuming the worst about other folks. In this context, a good rationality test is a publicly-visible personal test, applied to your personal beliefs when you are isolated from others’ assistance and info.
I’m much more interested in how we can join together to believe truth, and it actually seems easier to design institutions which achieve this end than to design institutions to test individual isolated general tendencies to discern truth. For example, with subsidized prediction markets, we can each specialize on the topics where we contribute best, relying on market consensus on all other topics. We don’t each need to train to identify and fix each possible kind of bias; each bias can instead have specialists who look for where that bias appears and then correct it.
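Hanson’s own logarithmic market scoring rule (LMSR) is the standard way to subsidize such a market: the sponsor funds an automated market maker that always quotes a price, so a specialist can trade only on the topics they know. A minimal sketch (the function names and the liquidity parameter `b` are illustrative choices, not any particular implementation):

```python
import math

def lmsr_cost(quantities, b=100.0):
    """Market maker's cost function: C(q) = b * log(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_price(quantities, i, b=100.0):
    """Instantaneous price of outcome i: exp(q_i/b) / sum_j exp(q_j/b).
    Prices across outcomes always sum to 1, so they read as probabilities."""
    total = sum(math.exp(q / b) for q in quantities)
    return math.exp(quantities[i] / b) / total

def trade_cost(quantities, i, shares, b=100.0):
    """What a trader pays the market maker to buy `shares` of outcome i."""
    after = list(quantities)
    after[i] += shares
    return lmsr_cost(after, b) - lmsr_cost(quantities, b)
```

The subsidy is bounded: the sponsor can lose at most b·log(n) across n outcomes, which is the price paid to draw out the specialists’ information.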
Perhaps martial-art-style rationality makes sense for isolated survivalist Einsteins forced by humanity’s vast stunning cluelessness to single-handedly block the coming robot rampage. But for those of us who respect the opinions of enough others to want to work with them to find truth, it makes more sense to design and field institutions which give each person better incentives to update a common consensus.
Robin Hanson has identified a breakdown in the metaphor of rationality as martial art: skillful violence can be more or less entirely deferred to specialists, but rationality is one of the things that everyone should know how to do, even if specialists do it better. Even though paramedics are better trained and equipped than civilians, a CPR-trained bystander at the scene of a heart attack can do more to save the victim’s life, simply because of the time it takes paramedics to arrive. Prediction markets are great for governments, corporations, or communities, but if an individual’s personal life has gotten bad enough to need the help of a professional rationalist, a little training in “cartography” could have nipped the problem in the bud.
To put it another way, thinking rationally is something I want to do, not have done for me. I would bet that Robin Hanson, and indeed most people, respect the opinions of others in proportion to the extent that they are rational. So the individual impulse toward learning to be less wrong is not only a path to winning, but a basic value of a rationalist community.
One can think that individuals can profit from being more rational, while also thinking that improving our social epistemic systems or participating in them actively will do more to increase our welfare than focusing on increasing individual rationality.
Another thing that you must do for yourself is politics; sadly EY is right that we can’t start discussing that here.
Yes, it would be silly to think of ourselves as isolated survivalists in a society where so many people are signed up for cryonics, where Many-Worlds was seen as retrospectively obvious as soon as it was proposed, and no one can be elected to public office if they openly admit to believing in God. But let us be realistic about which Earth we actually live in.
I too am greatly interested in group mechanisms of rationality—though I admit I put more emphasis on individuals; I suspect you can build more interesting systems out of smarter bricks. The obstacles are in many ways the same: testing the group, incentivizing the people in it. In most cases if you can test a group you can test an individual and vice versa.
But any group mechanism of that sort will have the character of a band of survivalists getting together to grow carrots. Prediction markets are lonely outposts of light in a world that isn’t so much “gone dark” as having never been illuminated to begin with; and the Policy Analysis Markets were burned by a horde of outraged barbarians.
We have always been in the Post-Apocalyptic Rationalist Environment, where even scientists and academics are doing it wrong and Dark Side Epistemology howls through the street; I don’t even angst about this, I just take it for granted. Any proposals for getting a civilization started need to take into account that it doesn’t already exist.
Sounds like you do think of yourself as an isolated survivalist in a world of aliens with which you cannot profitably coordinate. Let us know if you find those more interesting systems you suspect can be built from smarter bricks.
It’s pretty hard to be isolated in a world of six billion people. The key question is rather the probability of coordinating with any randomly selected person on a rationalist topic of fixed difficulty, and the total size of the community available to support some number of institutions.
To put it bluntly, if you built the ideal rationalist institution that requires one million supporters, you’d be in trouble because the 99.98th percentile of rationality is not adequate to support it (and also such rationalists may have other demands on their time).
But if you can build institutions that grow starting from small groups even in a not-previously-friendly environment, or upgrade rationalists starting from the 98th percentile to what we would currently regard as much higher levels, then odds look better for such institutions.
We both want to live in a friendly world with lots of high-grade rationalists and excellent institutions with good tests and good incentives, but I don’t think I already live there.
Even in the most civilized civilizations, barbarity takes place on a regular basis. There are some homicides in dark alleys in the safest countries on earth, and there are bankruptcies, poverty, and layoffs even in the richest countries.
In the same way, we live in a flawed society of reason, which has been growing and improving in fits and starts since the scientific revolution. We may be civilized in the arena of reason in the same way you could call Northern Europe in the 900s civilized in the arena of personal security: there are rules that nearly everyone knows and that most obey to some extent, but they are routinely disrespected, and the only thing that makes people really take heed is the theater of enforcement, whether that’s legally-sanctioned violence against notorious bandits or a dressing-down of notorious sophists.
Right now, we are only barely scraping together a culture of rationality; it may have a shaky foundation and many dumber bricks, but it seems a bit much to say we don’t have one.
Let us distinguish “truth-seekers”, people who respect and want truth, from “rationalists”, people who personally know how to believe truth. We can build better institutions that produce truth if only we have enough support from truth-seekers; we don’t actually need many rationalists. And having rationalists without good institutions may not produce much more shared accessible truth.
I’m not sure I can let you make that distinction without some more justification.
Most people think they’re truth-seekers and honestly claim to be truth-seekers. But the very existence of biases shows that thinking you’re a truth-seeker doesn’t make it so. Ask a hundred doctors, and they’ll all (without consciously lying!) say they’re looking for the truth about what really will help or hurt their patients. But give them your spiel about the flaws in the health system, and in the course of what they consider seeking the truth, they’ll dismiss your objections in a way you consider unfair. Build an institution that confirms your results, and they’ll dismiss the institution as biased or flawed or “silly”. These doctors are not liars or enemies of truth or anything. They’re normal people whose search for the truth is being hijacked in ways they can’t control.
The solution: turn them into rationalists. They don’t have to be black belt rationalists who can derive Bayes’ Theorem in their sleep, but they have to be rationalist enough that their natural good intentions towards truth-seeking correspond to actual truth-seeking and allow you to build your institutions without interference.
“The solution: turn them into rationalists.”
You don’t say how to accomplish this. Would it require (or at least benefit greatly from) institutional change?
I had in mind that you might convince someone abstractly to support eg prediction markets because they promote truth, and then they would accept the results of such markets even if it disagreed with their intuitions. They don’t have to know how to bet well in such markets to accept that they are a better truth-seeking institution. But yes, being a truth-seeker can be very different from believing that you are one.
Btw, I only just discovered the “inbox” that lets me find responses to my comments.
This sounds like you’re postulating people who have good taste in rationalist institutions without having good taste in rationality. Or you’re postulating that it’s easy to push on the former quantity without pushing on the latter. How likely is this really? Why wouldn’t any such effort be easily hijacked by institutions that look good to non-rationalists?
Eliezer, to the extent that any epistemic progress has been made at all, was it not ever thus?
To give one example: the scientific method is an incredibly powerful tool for generating knowledge, and has been very widely accepted as such for the past two centuries.
But even a cursory reading of the history of science reveals that scientists themselves, despite having great taste in rationalist institutions, often had terrible taste in personal rationality. They were frequently petty, biased, determined to believe their own theories regardless of evidence, defamatory and aggressive towards rival theorists, etc.
Ultimately, their taste in rational institutions coexisted with frequent lack of taste in personal rationality (certainly, a lack of Eliezer-level taste in personal rationality). It would have been better, no doubt, if they had had both tastes. But they didn’t. But in the end, it wasn’t necessary that they did.
I would also make some other points:
1. People tend to have stronger emotive attachments—and hence stronger biases—in relation to concrete issues (e.g. “is the theory I believe correct”) than epistemic institutions (e.g. “should we do an experiment to confirm the theory”). One reason is that such object level issues tend to be more politicised. Another is that they tend to have a more direct, concrete impact on individual lives (N.B. the actual impact of epistemic institutions is probably much greater, but for triggering our biases, the appearance of direct action is more important (cf thought experiments about sacrificing a single identifiable child to save faceless millions)).
2. Even very object-level biased people can be convinced to follow the same institutional epistemic framework. After all, if they are convinced that the framework is a truth-productive one, they will believe it will ultimately vindicate their theory. I think this is a key reason why competing ideologies agree to free speech, why competing scientists agree to the scientific method, why (by analogy) competing companies agree to free trade, etc.
[The question of what happens when one person’s theory begins to lose out under the framework is a different one, but by that stage, if enough people are following the epistemic framework, opting out may be socially impossible (e.g. if a famous scientist said “my theory has been falsified by experiment, so I am abandoning the scientific method!”, they would be a laughing stock)]
3. I really worry that “everyone on Earth is irrational, apart from me and my mates” is an incredibly gratifying and tempting position to hold. The romance of the lone point of light in an ocean of darkness! The drama of leading the fight to begin civilisation itself! The thrill of the hordes of Dark Side Epistemologists, surrounding the besieged outpost of reason! Who would not be tempted? I certainly am. But that is why I suspect.
I wonder whether a world “with lots of high-grade rationalists” is necessarily a friendly world. I doubt it. So I think rationality has to be tempered with something else. Let’s just call it “the milk of human kindness”.
I’m surprised to see this go negative.
Granted, Marshall didn’t explain his position in any detail. But his position is not indefensible, and I’m glad he’s willing to share it.
Downvote this heretic! I want to see him on −50 Karma, dammit! ;-0
Thanks Roko—nice with a bit of humour—btw your wish is almost granted I’ve lost 23 points in the space of 12 hours. Rationalists are fun people.....
How did you manage that!? What I want to know is what were the 3 people who downvoted my humorous comment thinking? Maybe 3 out of all the 10 or so people still reading this thread actually thought I was serious and downvoted me for ingroup bias? Or maybe people think that humor is a no-no on LW? I can see how too much humor would dilute the debate. Writing humorous comments is fun, and probably good in small amounts, but if it caught on this could turn into a social space rather than an intellectual one…
It doesn’t take much—just one jerk systematically downvoting a page or two of your existing comments. I lost like 37 points in less than an hour that way a few days ago. We really need separate up/down counts, or better yet ups and downs per voter, so you can ignore systematic friend upvotes and foe downvotes.
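The bookkeeping being asked for here is straightforward; a hypothetical sketch (the data shapes are invented for illustration, not the actual LessWrong schema) of per-voter tallies that would make a systematic downvoter stand out:

```python
from collections import defaultdict

def per_voter_tallies(votes):
    """votes: iterable of (voter, author, direction) with direction +1 or -1.
    Returns {(voter, author): (ups, downs)}, so systematic friend-upvoting
    or foe-downvoting shows up as a lopsided pair."""
    tally = defaultdict(lambda: [0, 0])
    for voter, author, direction in votes:
        if direction > 0:
            tally[(voter, author)][0] += 1
        else:
            tally[(voter, author)][1] += 1
    return {pair: tuple(counts) for pair, counts in tally.items()}

def suspicious_pairs(tallies, threshold=10):
    """Flag voter->author pairs that are entirely one-sided past a threshold."""
    return [pair for pair, (ups, downs) in tallies.items()
            if max(ups, downs) >= threshold and min(ups, downs) == 0]
```

With tallies like these, a moderator could discount (or simply inspect) the flagged pairs rather than treating every vote as independent.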
Are we already getting this behavior? I’ll have to start looking into voting patterns… Sigh.
Have you looked at Raph Levien’s work on attack resistant trust metrics?
Couldn’t it also be due to a change in the karma calculation rules in order to, say, not take your own upvote in account on karma calculations? I remember that was mentioned, but don’t know if it was implemented in the meantime.
Edit: Well, it seems that it isn’t implemented yet, since posting this got me a karma point :)
If your picture of a high-grade rationalist is still this Spock crap, what are you doing here?
By principle of charity, I interpret Marshall as saying not that rationalists can’t be kind, but that rationalism alone doesn’t make you kind. Judging by my informal torture vs. pie experiments, I find this to be true. Rationality is necessary but not sufficient for a friendly world. We also need people who value the right kind of things. Rationality can help clarify and amplify morality, but it’s got to start from pre-rational sources. Until further research is done, I suggest making everyone watch a lot of Thundercats and seeing whether that helps :)
Of course, like with every use of the principle of charity, I might just be reading too much into a statement that really was stupid.
Your torture vs. pie experiment makes me think of another potential experiment. Is torture ever preferable to making, say, 3^^^3 people never have pie again? (In the sense of dust specks, the never eating pie is to be the entire consequence of the action. The potential pie utility is just gone, nothing else.)
By the principle of accuracy, I look up Marshall’s other comments: http://lesswrong.com/user/Marshall/
Marshall doesn’t have to be voted down for being wrong. He can be voted down for using an applause light and being vague.
“Marshall doesn’t have to be voted down for being wrong. He can be voted down for using an applause light and being vague.”
So can Eliezer_Yudkowsky.
“Marshall doesn’t have to be voted down for being wrong. He can be voted down for using an applause light and being vague”
I have stared at this sentence for a long time, and I have wondered and wondered. I too have read my comments again. They are not vague. Not in the slightest. I think they belong to a slightly different reference-set than the other postings and emphasize language as metaphor (which I think Eliezer calls appealing to applause lights).
I would call Eliezer’s quoted sentence brutal. Majestically brutal—and I would think it has contributed to the 23 karma points I lost in 12 hours of non-activity.
I have no wish to be a member of a club, who will not have me. I have no wish to be a member of a club with royal commands.
“I have no wish to be a member of a club, who will not have me.”
This is not the case. You’ve made over 30 comments; it’s trivial for an individual to swing your karma by large amounts. I note that your karma has made large swings in the ~30 minutes I’ve been considering this reply. If you want to discuss the group dynamics of LW then I have more to say, but I’m going to request (temporarily) that you don’t accuse me of groupthink or status seeking if you do.
Putting so much work into talking about these things isn’t the act of an isolated survivalist, though.
If no one person has a good grasp of all the material, then there will be significant insights that are missed. Science in our era is already dominated by dumb specialists who know everything about nothing. EY’s work has been so good precisely because he took the effort to understand so many different subjects. I’ll bet at long odds that a prediction market containing an expert on evo-psych, an expert on each of five narrow AI specialisms, an expert on quantum mechanics, an expert on human biases, an expert on ethics and an expert on mathematical logic would not even have produced FAI as an idea to be bet upon.
If people could see inside each others’ heads and bet on (combinations of) people’s thoughts, this would work.
In reality, what will happen is that a singly debiased single subject specialist will simply not produce any ideas for the prediction market that (a) involve more than his specialism and (b) would require him to debias in more than one way.
For example, a logic expert who suffers from overconfidence in the effectiveness of logic in AI will not hypothesize that maybe something other than a logical KR is appropriate for the semantic web. [people in my research group were shocked when I produced this hypothesis] A bayesian stats researcher will not produce this hypothesis because he doesn’t know the semantic web exists; it isn’t part of his world.
What I am driving at with this comment is that the strength of connection between thoughts held in one mind is much greater than the strength of connection between thoughts in a market. In a market, two distinct predictions interact in a very simple way: their price. In a mind, two or more insights can be combined. If no individual mind is bias-free, then we lose this “single mind” advantage. [Apologies for comment deletion. It would be nice to have a preview button...]
I think Robin already pre-answered this, though perhaps with a touch of sarcasm: “Perhaps martial-art-style rationality makes sense for isolated survivalist Einsteins forced by humanity’s vast stunning cluelessness to single-handedly block the coming robot rampage.”
Can you offer any examples of generalists (and/or rationalists) who have produced significant insights besides Eliezer? When I look at history, I see subject specialists successfully branching out into new areas and making significant progress, whereas generalists/rationalists have failed to produce any significant work (look at philosophy).
Leibniz, Da Vinci, Pascal, Descartes, and John von Neumann spring immediately to mind for me.
There’s also Poincaré, often considered the last universalist. Kant is famous as a philosopher, but also worked in astronomy. Bertrand Russell did work in philosophy as well as mathematics, and was something of a generalist. Noam Chomsky is the linguist of the 20th century, and if you consider any of his political and media analysis outside of linguistics to be worthwhile, he’s another. Bucky Fuller. Charles Peirce. William James. Aristotle. Goethe. Thomas Jefferson. Benjamin Franklin. Omar Khayyám.
Just thought of Gauss, who in addition to his work in mathematics did considerable work in physics.
Herbert Simon: psychology and computer science (got an economics Nobel).
Alan Turing: don’t know how I could have forgotten him.
Norbert Wiener.
Good answers. Also, Pierre-Simon Laplace, one of the inventors of Bayesian statistics, was also an excellent astronomer and physicist (and briefly the French Minister of the Interior, of all things)
Yeah, Laplace certainly belongs close to the top of any such list.
There’s probably a few in there. I won’t try to dispute them on a case by case basis. There are, on the other hand, literally thousands of specialists who have achieved more impressive feats in their fields than many of the people you cite. (I take straightforward exception to Chomsky who founded a school of linguistics that’s explicitly anti-empirical.)
Not to defend anything specific about Chomsky’s program, but “anti-empirical” is unfair. “Anti-empiricist” would be more reasonable (though still missing the point, in my opinion).
I thought this would happen. The wisdom of my plan to list the top 10 academics first and then check whether they’re specialists or generalists is paying off…
Another method may be to list the top 10 achievements first and then check whether each was produced by a specialist or a generalist. I imagine Prometheus was a generalist.
This is a good idea. But I think 10 is too few. It would be better to pick the top 100 or 200, and see how many people who contributed to multiple fields are on the list.
I’ve not created the list first, but have thought of which of those I listed above have done something that would belong on that list, so feel free to take possible confirmation bias into account on my part, but even after trying to account for that, I think many of the following accomplishments would be on the list:
Calculus: Leibniz, Newton
Physics: Newton [forgot about Newton originally, but he was a generalist]
Entscheidungsproblem, Turing machine: Turing
Too much important math to list: Gauss
Contributions to quantum mechanics, economics & game theory, computer science (we’re using a von Neumann-style computer), set theory, logic, and much else: von Neumann
It’s worth remembering that what we’re looking for is not just people who contributed to multiple fields but generalists/rationalists: people who took a “big picture” view. (I’m willing to set aside the matter of whether their specific achievements were related to their “big picture” view of things since it will probably just lead to argument without resolution.) Leibniz would definitely fall into that category, for example, but I’m not sure Newton would. He had interests outside of physics (religion/mysticism) but they weren’t really related to one another.
what’s the difference between being a generalist and contributing to multiple fields?
No, no! This is the meat of the question. If it were the case that generalism correlated with but did not cause great insights (for example, in a world that forced all really clever people to study at least 3 subjects for their whole academic lives this would be the case), then my original argument would fail.
It should be noted that Turing and Shannon both studied with Norbert Wiener, and he might have come up with most of their interesting ideas (and possibly von Neumann’s as well). Also, Wiener founded the study of cybernetics, made notable contributions to gunnery, and made the first real contribution to the field of computer ethics.
ETA: not to discredit the work of Turing, Shannon, and von Neumann, but rather to note that Wiener is definitely someone who made major contributions and should be on the ‘generalists’ list.
Wiener is on the original list I gave a couple of posts up.
Do you have a reference for Turing studying with Wiener and Turing getting his ideas from him? I checked all pages in Hodges’s biography of Turing that mention Wiener, and none of them mention that he studied with Wiener.
Turing’s Entscheidungsproblem paper (which also introduced the Turing machine) was published in 1936. The only (in-person) connection between them I found (though I didn’t search other than checking the bio) is that Wiener spoke with Turing about cybernetics in 1947 while passing by on his way to Nancy.
Are there specific discoveries you believe are falsely attributed to Turing, von Neumann, or Shannon, and can you provide any evidence?
I like Eliezer’s writing, but I think he himself has described his work as “philosophy of AI”. He’s been a great popularizer (and kudos to folks like him and Dawkins), but that’s different from having “produced significant insights”. Or perhaps his insight is supposed to be “We are really screwed unless we resolve certain problems requiring significant insights!”.
Ok, the best way for me to answer this question is to list the 10 most important scientists/academics of all time, and then look them up on wikipedia. I’ll write down the list, and then comment again once I’ve ascertained how “generalist” they are. So, in order of importance:
Galileo
Darwin
Newton
Descartes
Socrates
Aristotle
Plato
Hume
Einstein
Francis Bacon
EDIT: I kind of picked these guys at random out of “famous important academics”. Berners-Lee is on my mind as I study the semantic web. The main point of the exercise is that I wrote down the names before I went and read their wikipedia articles to see how much they’re generalists. Do feel free to suggest changes to this list. Once some consensus is reached, I will post the analysis. I kicked Pythagoras off in favor of Francis Bacon, since Bacon seems to be particularly relevant to this site’s interests, and the article on Pythagoras disputes the worth of his science. Strictly speaking, this is a bit naughty of me, but what the hell—I’ll allow this one indulgence. Note that I didn’t look at Francis Bacon’s article before I decided he was to go on the list; I was spurred into including him by scientism’s comment below.
Roko: rather than picking out of random, it’d be better to start with a survey of the historical literature. Fortunately, the search and statistical ranking has already been done in Human Accomplishment.
For the combined science index, we get:
Newton
Galileo
Aristotle
Kepler
Lavoisier
Descartes
Huygens
Laplace
Einstein
Faraday
It’s a list that seems reasonable to me, as surprising as Lavoisier, Huygens, and Faraday may be.
OK, that’s an interesting list.
Of course, it misses out the philosophers, but they appear in the “western philosophy” list. Since Aristotle, Plato, Descartes and Hume all appear in the top ten of that list, it seems that the only odd ones out in my list are Bacon, Darwin and Socrates.
But, a larger list will not hurt us. So I’ll throw in the top ten from combined sciences, and the top ten from philosophy. Corr, that’s going to be quite some work to do…
I think an important issue in this generalist/specialist debate and this attempt to create a list of the most important figures is that the historical time frame may be very relevant.
As the world becomes increasingly complex and fields of study, old and new, become increasingly specialized, would this not affect the ability of a generalist/specialist to produce a significant insight or make a significant contribution?
Perhaps it makes more sense to consider much more recent people as examples if we want to apply this to society as it stands now.
Darwin was almost preempted by Wallace. Newton and Leibniz arrived at the same calculus independently, and similar work was done by Seki Kowa at the same time. They were merely there first and most prominently, but not uniquely. I think to satisfy importance, we want cut vertex scientists and academics.
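In graph terms, a cut vertex (articulation point) is a node whose removal disconnects the graph, so a “cut vertex scientist” would be one without whom two bodies of work never connect. A brute-force sketch on a toy influence graph (the graph itself is invented purely for illustration):

```python
def connected_after_removal(adj, removed):
    """Check whether the graph stays connected with `removed` deleted,
    using a plain depth-first search over the remaining nodes."""
    nodes = [n for n in adj if n != removed]
    if not nodes:
        return True
    seen = {nodes[0]}
    stack = [nodes[0]]
    while stack:
        current = stack.pop()
        for neighbor in adj[current]:
            if neighbor != removed and neighbor not in seen:
                seen.add(neighbor)
                stack.append(neighbor)
    return len(seen) == len(nodes)

def cut_vertices(adj):
    """A vertex is a cut vertex iff deleting it disconnects the graph."""
    return {v for v in adj if not connected_after_removal(adj, v)}
```

On a path a—b—c, only b is a cut vertex; on a triangle, no vertex is, which matches the point above: where Wallace or Seki Kowa provide a second path to the same result, the discoverer is no longer a cut vertex.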
What constitutes a “cut vertex” here depends entirely on how far you want to take the counterfactual. Who do you shoot so that humanity makes no further progress, ever?
Stanislav Yevgrafovich Petrov?
Socrates is an odd fellow to have on the list, since there aren’t any works by Socrates. If you think Plato should be on the list, feel free to kick Socrates off.
As a physicist, I’ve always been partial to Maxwell’s work—he deduced the induction of a curled magnetic field by a changing electric field solely from mathematical considerations, and from this, was able to guess the nature of light before any other human.
I’ve mixed feelings about Descartes. The pull of the Cartesian Theater has muddling effects in serious cognitive philosophy. On the other hand, by making the concept explicit, he did make it easier for others to point out that it was wrong.
Regarding the Cartesian Theater, I think it obviously had an impact on Global Workspace Theory, which actually seems to be going in the right direction.
And let’s not forget Descartes’s many other contributions. The coordinate grid and analytic geometry, anyone?
Exactly. Descartes laid the foundation for future progress.
The top-ten list needs Galileo. Galileo > Newton. Galileo > Einstein.
And Berners-Lee? If he had never started the WWW, within 2 years of when he did start it, someone else would have started something very similar. (And his W3C does dumb things.) If you want a contributor to the internet on the list, I humbly suggest J.C.R. Licklider, his protégé Roberts, or one of the four authors of “The End-to-end Argument”.
Berners-Lee? Recency effect much?
Darwin? Seriously? The essential kernel of his theory is so easy to understand that I’m reluctant to give him much credit for inventing it.
Most truly great insights feel obvious in retrospect.
Massive hindsight bias. Whether we, as a race, are proud of it or not, it wasn’t until Darwin, only 150 years ago, that someone seriously suggested and developed it.
Natural selection is the combination of two ideas: 1. Population characteristics change over time if members of the population are systematically disallowed reproduction. 2. Nature systematically disallows reproduction.
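Those two ideas compose into a simulation only a few lines long; a toy sketch with arbitrary parameters (the trait values, mutation size, and fitness rule are all invented for illustration):

```python
import random

def generation(population, fitness, mutation=0.05, rng=random):
    """Idea 2: nature systematically disallows some reproduction —
    each candidate parent reproduces only with probability fitness(trait)."""
    offspring = []
    while len(offspring) < len(population):
        parent = rng.choice(population)
        if rng.random() < fitness(parent):
            # Idea 1: offspring inherit the parent's trait, plus a little mutation.
            offspring.append(parent + rng.gauss(0, mutation))
    return offspring

random.seed(0)
pop = [0.2] * 100  # everyone starts with a low trait value
for _ in range(200):
    # fitness equals the trait itself, clamped to [0, 1]
    pop = generation(pop, fitness=lambda t: min(max(t, 0.0), 1.0))
mean_trait = sum(pop) / len(pop)  # climbs well above the starting 0.2
```

Nothing in the loop "aims" at anything; the population's characteristics change simply because reproduction is systematically unequal, which is the whole of the combined idea.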
I’m willing to accept that I’m suffering from hindsight bias. But will you at least give me that his theory is much easier to understand than any of the others? And maybe a few guesses on the topic of why it was so hard to think of?
Also, even if an insight is rare, that doesn’t mean its bearer deserves credit. Many inventors made important accidental discoveries, and I imagine luck must have factored into Darwin’s discovery somehow as well. If 1% of biologists who had gone on the voyage Darwin went on also would have developed the theory, does he still deserve to be on the list of the top ten intellectuals?
Addendum: Here is an argument that the great scientists and mathematicians of the past don’t deserve as much credit as we give them: they were prolific. We have no modern equivalent of Euler or Gauss; John von Neumann was called “the last of the great mathematicians”. There are two possibilities here: either those earlier thinkers were smarter than we are, or their accomplishments were less difficult than those of modern thinkers. The Flynn effect suggests that IQs are rising over time, so I’m inclined to believe that their accomplishments were genuinely less difficult.
And even if making new contributions to these fields isn’t getting more difficult, surely you must grant that it must become more difficult at some point, assuming that to make a new contribution to a field you must understand all the concepts your contribution relies on, and all the concepts those concepts rely on, etc.
Extraordinarily so, yes—it does astonish me that no one hit it before. Nonetheless, the empirical fact remains, so...
I suppose the sense of “mystery” people attached to life played into it somewhat.
People were breeding animals, people were selecting them, and... socially there was already some idea of genetic fitness. Men admired men who could father many children. The idea of heredity was there.
Honestly, the more I think of it, the more I share your confusion. It is deeply odd that we were blinded for so long. Perhaps we should work to figure out how this happened, and whether we can avoid it in the future.
I don’t think luck can factor in quite as much as you imagine though. We’re not attempting to award credit, so much as we are attempting to identify circumstances which tend to produce people who tend to produce important insights. Darwin’s insight was incredibly important, and had gone unseen for centuries. To me, that qualifies him.
Even if you put it at a remove, even if you say, well, Darwin was uniquely inspired by his voyage, another biologist could have done the same, then the voyage becomes important. Why didn’t another biologist wind up on a voyage like that? What can we do to ensure that inspiring experiences like that are available to future intellectuals? In this way, Darwin’s life remains an important data point, even if—especially if—we deny that there was anything innately superior about the man.
Agreed, completely—they pulled the low-hanging fruit from the search space.
I’m confused—do you mean that deism, specifically, made it hard to think of, or easy? And I’m not sure many were deists—I can’t find numbers, but I was under the impression deism was always a really small movement.
EDIT: nevermind, reference to deism was removed in an edit.
I meant that I thought the fact that so many took it for granted that God created the animals was one of the factors that made evolution hard to think of, and that Darwin shouldn’t get genius status just for overcoming it. But then I remembered Lamarck and thought better of it. I still think it is a weak argument against considering Darwin a genius, though.
I’d also suggest that Darwin’s insight was far less intuitive than the insights of Newton (although this may reflect just different degrees of hindsight bias).
Indeed, I suspect it does. Imagine not having calculus… or mechanics, and then having to reinvent it. That formalism has been in my head for the last 10 years, so it’s really hard for me to let go of it. Are you a physical sciences guy too?
I think an important issue in this generalist/specialist debate and this attempt to create a list of the most important figures is that the historical time frame may be very relevant.
As the world becomes increasingly complex and fields of study, old and new, become increasingly specialized, would this not affect the ability of a generalist/specialist to produce a significant insight or make a significant contribution?
Perhaps it makes more sense to consider much more recent people as examples if we want to apply this to society as it stands now.
Aubrey de Grey hasn’t yet been proved right, so he’s a tentative example, but he is a rare biological theorist, whereas most biologists are specialized experimenters.
Out of interest, when you said:
Who were you thinking of?
I think philosophy is a good example. Philosophers are supposed to be more logical/rational than other people and have been generalists until recently (many still are). They have also failed to produce a single significant piece of work on par with anything found in science. Now, some people might disagree with that assessment, but I suspect their counterexamples would be chiefly in specialist sub-disciplines: formal logic, for example. I think to the degree that there has been “good philosophy” it’s found under the model of specialists working under the kind of robust institutional framework Robin alludes to rather than individual theorists taking a global perspective (philosophy as martial arts). I can’t think of any systematizers I’d credit with discovering truth. I do not think Socrates, Plato, Aristotle and Descartes discovered any substantial truths (Descartes’s mathematical work aside), so we probably differ there. Regardless, I think there’s a good argument to be made that historically truth has come from robust institutions involving many specialists (such as science) rather than brilliant lone thinkers taking a global perspective.
You seem to differ from the rest of the world, too. Wikipedia:
There’s a huge difference between being considered historically important and having discovered substantial truth. The Bible is historically important. It helped lay the foundations of Western culture. This is hardly disputable. It does not, however, contain much in the way of truth. Nor do the works of Plato and Aristotle.
To take one example: Aristotle laid down the foundation of what became modern science. Modern science became modern science as we think of it by rebelling against Aristotle’s a priori assumptions; without Aristotle, what science we have today would be very different, indeed.
I don’t think you can so easily dismiss Plato, Aristotle, Descartes, et al.: without them we wouldn’t be where we are today.
This is part of the problem I often detected at OB and see again here at LW: people with little respect for intellectual history.
Isaac Asimov was a generalist.
Make of that what you will.
Roko, great comment, but you should’ve just Edited. Why delete and repost?
Thanks. I wrote the original comment, then realized that I hadn’t read the post as thoroughly as I should have done and worried that I’d straw-manned Robin, so I deleted the comment not realizing that Robin had replied to it. When I’d read the post again and read my comment, I made a slight change and decided that the critique was on point and I was really critiquing Robin’s position, not a straw man. Preview would help slightly, because you could read your comment next to the OP and do a “did I straw man him?” sanity check.
FYI, I had replied to the previous version of the comment.
Robin said:
Or even combining two or three topics with 5 or 6 ways to debias… if you’re going to go to the effort of combining several academic subjects in one mind, it is almost certainly worth the effort of adding in the subject of “heuristics and biases/rationality arts”; at the cost of learning 1 more subject, you’ll improve your performance across the board, and in particular you’ll improve your ability to combine subjects as you’ll be in a good position to dispassionately weigh the merits of various approaches and synergies.
One problem with trusting the experts rather than trying to think things through for yourself is that you need a certain amount of expertise just to understand what the experts are saying. The experts might be able to tell you that “all symmetric matrices are orthonormally diagonalizable,” and you might have perfect trust in them, but without a lot of personal study and inquiry, the mere words don’t help you very much.
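The expert claim quoted here happens to be machine-checkable even without understanding the proof. A quick numerical spot-check with NumPy (a sanity check on one random instance, not a substitute for the spectral theorem):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random symmetric matrix.
A = rng.standard_normal((5, 5))
S = (A + A.T) / 2

# The claim: S = Q diag(w) Q^T with Q orthonormal.
w, Q = np.linalg.eigh(S)

# Q's columns are orthonormal...
assert np.allclose(Q.T @ Q, np.eye(5))
# ...and they diagonalize S.
assert np.allclose(Q @ np.diag(w) @ Q.T, S)
print("symmetric case checks out")
```

Of course, this only confirms the statement; it does nothing to supply the understanding of *why* it holds, which is the commenter's point.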
That doesn’t matter if the expert can say “hire this guy”, “invest in this company”, “vote for this guy”, or “donate to this charity”. If you’re doing some sort of complicated action with careful integration of expert advice, then it’s probably worthwhile becoming at least a semi-expert yourself.
All the worse if you are convinced that God hates diagonalizable matrices, and so you prefer not to believe the heathen.
Experts don’t just tell us facts; they also offer recommendations as to how to solve individual or social problems. We can often rely on the recommendations even if we don’t understand the underlying analysis, so long as we have picked good experts to rely on.
There is a key right there. Ability in rational thinking and understanding of common biases can drastically impact whom we consider a good expert. The most obvious examples are ‘experts’ in medicine and economics. I suggest that the most influential experts in those fields are not those with the most accurate understanding.
Rationalist training could be expected to improve our judgement when choosing experts.
True. But it is still easier in many cases to pick good experts than to independently assess the validity of expert conclusions. So we might make more overall epistemic advances by a twin focus: (1) Disseminate the techniques for selecting reliable experts, and (2) Design, implement and operate institutions that are better at finding the truth.
Note also that your concern can also be addressed as one subset of institutional design questions: How should we reform fields such as medicine or economics so that influence will better track true expertise?
and if an expert says “all matrices are orthonormally diagonalizable”, it sounds equally impressive, but it is false as false can be.
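And the false version really is false in a checkable way. The shear matrix below is a standard counterexample (my choice of illustration, not from the comment): its eigenvalue 1 has multiplicity two, but the eigenspace is only one-dimensional, so no basis of eigenvectors exists, orthonormal or otherwise.

```python
import numpy as np

# A defective matrix: eigenvalue 1 with algebraic multiplicity 2...
M = np.array([[1.0, 1.0],
              [0.0, 1.0]])

eigenvalues, eigenvectors = np.linalg.eig(M)

# ...but the two returned eigenvectors are (numerically) parallel,
# so they cannot form a basis: M is not diagonalizable at all.
rank = np.linalg.matrix_rank(eigenvectors)
print(rank)
```

So an impressive-sounding universal claim can fail on a two-by-two matrix, which is part of why blind trust in half-understood expert statements is risky.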
But there are simply far too many areas of life involving putative “orthonormally diagonalizable matrices” for any one individual to be able to rationally investigate. At some point you have to take someone’s word for it; so rather than taking one expert’s word, you’re likely better off trusting a community of experts. A current example might be with global warming—most scientists seem to feel it’s a major issue.
Unfortunately, though, radical changes in thinking usually come from the margin, e.g., Galileo. The hard part, it seems to me, is to distinguish between mere status quo convention and genuine expert agreement.
Without the study, you wouldn’t have a basis for understanding? (grin/duck/run)
Following the martial arts analogy, I guess that makes Robin a supporter of “Rationalist Gangs”.
One of the ways that I think that OB could have been better, and that I think LW could be more helpful, is to put a greater emphasis on practice and practical techniques for improving rationality in the writings here and to give many more real-life examples than we do.
When making a post that hints at any kind of a practical technique, posters could really make an effort to clearly identify the practical implications and techniques, to put all the practical parts together in the essay rather than mixing them throughout 15 paragraphs of justification and reasoning, and to highlight that practical part of the post.
The practical parts could be extracted and placed together somewhere in order to have one single place that people can go to easily find them. Perhaps the LW software could provide some kind of support for distinguishing the practice sections of a post, and the extraction and aggregation of the practical howto sections could be automated.
Hear, hear. Practice and practical techniques. Isn’t that what we’re after here?
Robin was kind enough not to say what overemphasizing the heroic individual rationalist implies about our true motivations.
That’s overly simplistic. Two people might have the same motivations and goals but disagree about the most effective way of achieving those goals. If you think that’s not the case, you should give an argument to that effect. If you think it doesn’t apply in the particular case that we all know you have in mind, you should give an argument to that effect.
I’m surprised the parent is rated up to 10 points. It indulges in armchair psychologizing with no supporting evidence or reasoning, and it interprets the situation in the least intellectually charitable way and assumes the worst of motivations.
I love your thesis and metaphor, that the goal is for us all jointly to become rational, seek, and find truth. But I do not “respect the opinions of enough others.” I have political/scientific disagreements so deep and frequent that I usually just hide them and worry. I resonated best with your penultimate sentence: “humanity’s vast stunning cluelessness” does seem to be the problem. Has someone written on the consequences of taking over the world? The human genome, presumptively adapted to forward its best interests in a competitive world, may have only limited rationality, inadequate to the tasks of altruism, global thinking, and numerical analysis. By this last phrase I refer to our overreaction to a burning skyscraper, when an equal number of deaths monthly on freeways, by being less spectacular or poignant, motivates a disproportionately low response. Surely the difference there is a “gut” reaction, not a cogent one. We need to change what we care about, but we’re hardwired to worry about spectacle, perhaps?
Unusual threat by a rival tribe. Retaliation necessary. Excuse to take politically self serving moves by surfing a tide of patriotic sentiment. That sort of thing. What you would expect monkeys to care about.
Maybe personal finance is a better analogy than martial arts. It’s useful for nearly anybody to know about personal finance, yet many people are lacking even in the basics. Some high-falutin stock market concepts may not be useful to the average Joe, the same way advanced rationality (“better than Einstein”) may not be needed, but still, education about the basics is useful.
Sure, most prediction market traders could stand to review some rationality basics.
For whatever reason, the community here (so-called “rationalists”) is heavily influenced by overly individualistic ideologies (libertarianism, or in its more extreme forms, objectivism). This leads to ignoring entire realms of human phenomena (social cognition) and the people who have studied them (Vygotsky, sociologists of science, ethnomethodology). It’s not that social approaches to cognition provide a magic bullet—they just provide a very different perspective on how minds work. Imagine if you stop believing that beliefs are in the head and instead locate them in a community or institution. If interested, you could start with How Institutions Think by Mary Douglas.
I am guilty as charged in being much more familiar with individualistic than socially oriented ideologies.
Why don’t you write some posts about techniques or discoveries from socially-oriented science that could help rationalists?
I would say Robin Hanson’s views on status fit quite well into the gap you perceive. I do find it interesting that status isn’t talked about more on Less Wrong.
Maybe I can tie this into what I think about the article. LW’s articles do currently take an individualist stance on rationality (although I doubt objectivism has any role in this). The “refinements” they propose are mostly alterations of cognitive habits, not suggested ways of changing group dynamics. But LW as a whole is not simply a bunch of iconoclasts. Rather, there appears to be a clear attempt to collectively change patterns of thought. People write stuff, get +/- karma, feel good/bad, update their beliefs and try again. So even though the content of LW is individually applicable, posters will naturally develop preferred topics of expertise, subjects on which they know enough to benefit the community by what they write. And developing expertise does benefit from the martial arts analogy.
Was there a time when we neglected status as a topic? wow. I don’t remember that.
The flaw in that is that it ignores dissenters: to some extent, minorities in a community can dissent from the common belief.
This sounds to me a lot like “Imagine if you stop believing that information is in the genes and locate it in a species.”
I don’t think institutional effects on thought are a bad thing to study (institutions definitely have massive effects on the environments individuals operate in), but I think assigning thinking-entity status to institutions is a bad way to approach that study. Thinking about information stored in species has a long and storied history of making worse predictions than thinking about information stored in genes.
But institutions certainly apply selection pressure on memes, and influence how memes replicate themselves and propagate. The analogy is also somewhat tenuous: institutions are far more fluid (almost by definition) in their boundaries than species. Because of their tremendous impact, institutional design deserves attention comparable to environmental design (architecture, agriculture, lots of smaller fields).
(We do already have those fields, though; the economy is the environment commercial institutions are built for (and other institutions reside in as well), and economists try to study it and design it. Public choice theorists help study the design of (primarily democratic) political institutions.)
As a martial arts enthusiast I have to concur that the practical survivability impact of my training is somewhat limited. In fact, I would go as far as to say that my martial art training is far less likely to save my life than is my previous sporting hobby, running.
The martial arts metaphor for rationality training applies as much to my motives for participation as it does to the training itself. I don’t expect to beat many armed assailants to a pulp in a dark alley, nor do I expect eliminating biases from my cognition to make a dramatic impact on my success or life satisfaction. However, I relish every opportunity to push both my body and mind to their limits in elegance and performance. I am also attracted to subcultures that tend toward non-exclusivity combined with skill-based elitism.
I unashamedly confess that I’d be a rationalist even if it had absolutely no direct benefit (over participation in the activities of any other arbitrary non-rationalist subculture to a similar degree). But at the same time I have to concur with Robin on the best way to go about finding truth.
Absolutely. There is just something comforting in knowing that if the information I am relying upon is flawed, someone is losing money because of it. It’s even better to know that if you do find flaws you’ll be rewarded for doing so, not hunted down and persecuted as a ‘whistleblower’ or a heretic.
Unfortunately ‘designing institutions’ doesn’t sound like the hard part. The hard part is taking these institutions and making them an active reality. Diluting the influence of authority tends to go against the interests of those in authority, at least as they perceive it. Of course, that particular robotic rampage of human stupidity is not something I personally need to overcome with my own rationalist-fu. I can respect the opinions of Robin et al. and eagerly keep abreast of their insights and practical solutions.
Yes, you are right that designing need not be the hard part. So I just changed “design” to “design and field.”
My krav maga instructor (a bouncer) used to emphasize that 90% of realistic self-defense is about avoiding trouble, and running is a battle-tested survival technique. I think running was the best way to keep your sanity in the Cthulhu role-playing too. So, the first line of self-defense: don’t open that old book, run away and read what people at LW are saying.
90% of actual self-defense confrontations involve extremely simple techniques. Hard-core martial arts training is about beating other martial artists. If you just want practical survival skills, learn the control techniques cops use and practice them.
On this point, we should also be talking about effective evangelism for rationality.
One thing I thought of is to print out a bunch of copies of this paper and start giving it to the Greenpeace activists I see around my community college.
We should learn how to identify trustworthy experts. Is there some general way, or do you have to rely on specific rules for each category of knowledge?
Two examples of rules are: never trust someone’s advice about which specific stocks you should buy unless the advisor has material non-public information, and be extremely skeptical of statistical evidence presented in Women’s Studies journals. Although both rules are probably true, you obviously couldn’t trust financial advisers or Women’s Studies professors to give them to you.
Have you evaluated statistical evidence in Women’s Studies journals?
Prediction markets can forecast the accuracy or fame of purported experts. But preferably you’d accept the market estimate on your question and so not need to know who is an expert.
This is of course exactly the point. People will be people. The solution is to depersonalize, not to pick some fine guy and put faith in him. Trying to find out which experts to trust feels to me like asking which tyrants can best be trusted. Experts are valuable (unlike tyrants), but trust is better placed in a market than in individual people.
Obviously it helps if the experts are required to make predictions that are scoreable. Over time, we could examine both the track records of individual experts and entire disciplines in correctly predicting outcomes. Ideally, we would want to test these predictions against those made by non-experts, to see how much value the expertise is actually adding.
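Once predictions are recorded as probabilities, the scoring itself is straightforward. A minimal sketch of ranking experts by Brier score (the names and numbers are invented for illustration):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.
    Lower is better; always guessing 50% earns exactly 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical track records on the same five yes/no questions.
outcomes = [1, 0, 1, 1, 0]
experts = {
    "confident expert": [0.9, 0.2, 0.8, 0.7, 0.1],
    "hedging expert":   [0.6, 0.4, 0.6, 0.6, 0.4],
    "coin flipper":     [0.5, 0.5, 0.5, 0.5, 0.5],
}

# Rank from best (lowest score) to worst.
for name, forecasts in sorted(experts.items(),
                              key=lambda kv: brier_score(kv[1], outcomes)):
    print(f"{name}: {brier_score(forecasts, outcomes):.3f}")
```

The coin-flipper baseline is exactly the "does the expertise add value over a non-expert?" comparison suggested above.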
Another proposal, which I raised on a previous comment thread, is to collect third-party credibility assessments in centralized databases. We could collect the rates at which expert witnesses are permitted to testify at trial and the rate at which their conclusions are accepted or rejected by courts, for instance. We could similarly track the frequency with which authors have their articles accepted or rejected by journals engaged in blind peer-review (although if the review is less than truly blind, the data might be a better indication of status than of expertise, to the degree the two are not correlated). Finally, citation counts could serve as a weak proxy for trustworthiness, to the degree the citations are from recognized experts and indicate approval.
The suggestions from the second paragraph all seem rather incestuous. Propagating trust is great but it should flow from a trustworthy fountain. Those designated “experts” need some non-incestuous test as their foundation (a la your first paragraph).
Internal credibility is of little use when we want to compare the credentials of experts in widely differing fields. But it is useful if we want to know whether someone is trusted in their own field. Now suppose that we have enough information about a field to decide that good work in that field generally deserves some of our trust (even if the field’s practices fall short of the ideal). By tracking internal credibility, we have picked out useful sources of information.
Note too that this method could be useful if we think a field is epistemically rotten. If someone is especially trusted by literary theorists, we might want to downgrade our trust in them, solely on that basis.
So the two inquiries complement each other: We want to be able to grade different institutions and fields on the basis of overall trustworthiness, and then pick out particularly good experts from within those fields we trust in general.
p.s. Peer review and citation counting are probably incestuous, but I don’t think the charge makes sense in the expert witness evaluation context.
Another good example is the legal system. Individually it serves many participants poorly on a truth-seeking level; it encourages them to commit strongly to an initial position and make only those arguments that advance their cases, while doing everything they can to conceal their cases’ flaws short of explicit misrepresentation. They are rewarded for winning, whether or not their position is correct. On the other hand, this set-up (combined with modern liberalized disclosure rules) works fairly well as a way of aggregating all the relevant evidence and arguments before a decisionmaker. And that decisionmaker is subject to strong social pressures not to seek to affiliate with the biased parties. Finally, in many instances the decisionmaker must provide specific reasons for rejecting the parties’ evidence and arguments, and make this reasoning available for public scrutiny.
The system, in short, works by encouraging individual bias in service of greater systemic rationality.
The legal system does supposedly encourage individual bias to aggregate evidence; I’m more of a skeptic about how well it actually does this in practice though.
Care to explain the basis for your skepticism?
Interestingly, there may be a way to test this question, at least partially. Most legal systems have procedures in place to allow judgments to be revisited upon the discovery of new evidence that was not previously available. There are many procedural complications in making cross-national comparisons, but it would be interesting to compare the rate at which such motions are granted in systems that are more adversarially driven versus more inquisitorial systems (in which a neutral magistrate has more control over the collection of evidence).
The rationality dojo seems to be part of a world where “we” work together for truth, at least if you don’t take the dojo metaphor too seriously. I assume that training individuals to be more rational is part of your optimal strategy. So I take it that your argument is that we should emphasize individual training less relative to designing institutions which facilitate truth-finding despite our biases. Am I understanding you correctly?
Yup.
Robin wrote: “Martial arts can be a good training to ensure your personal security, if you assume the worst about your tools and environment.” But this does not mean that martial arts cannot also be good training if you assume a more benign environment. Environments are known to be unpredictable.
One of the most important insights a person gains from martial arts training is to understand one’s limits—which relates directly to the bias of overconfidence. If martial arts training enables a person to project an honestly greater degree of self confidence, then the signaling benefit alone may merit the effort. Does rationality training confer analogous signaling benefits?
Good point. Fortunately, I think the OB and LW blogs have helped me understand my limits, in the sense that it showed me many errors-in-rationality in the ways I used to (and unfortunately, currently do still) think.
It probably does. If you go to cocktail parties tossing around terms like “Bayesian updating with Occam priors” or “Epistemic rationality” and sound like you really know what you’re talking about, then you’ll probably exude this signal of being a fairly smart person.
But you have to ask yourself if your goal is to sound smart, or to actually be smart.
An attempt to even find Einsteins is doomed unless the number of them is large enough as a fraction of the population. (cf: Eliezer’s introduction to Bayes.)
On the other hand, a purely aggregate approach is a dirty hack that somehow assumes no (irrational) individual is ever able to be a bottleneck to (aggregate) good sense. It’s also fragile to societal breakdown.
It seems evident to me that what’s really urgent is to “raise the tide” and have it “lift all boats”. Because then, tests start working and the individual bottleneck is rational.
I predict that aggregate approaches are going to be more common in the future than waiting around for an Einstein-level intelligence to be born.
For example, Timothy Gowers recently began a project (Polymath1) to solve an open problem in combinatorics through distributed proof methods. Current opinion is that they were probably successful; unfortunately, the math is too hard for me to render judgment.
Now, it’s possible that they were successful because the project attracted the notice of Terence Tao, who probably qualifies as an Einstein-level mathematician. If you look at the discussion, Tao and Gowers both dominate it. On the other hand, many of the major breakthroughs in the project didn’t come from either of them directly, but from other anonymous or pseudo-anonymous comments.
The time of an Einstein or Tao is too valuable for them to do all the thinking by themselves. We agree that raising the tide is absolutely necessary for this kind of project to grow.
For Polymath the kind of desired result of collaboration is clear to me: a (new) (dis-) proof of a mathematical statement.
What is the kind of desired result of collaborating rationalists?
From the talk about prediction markets it seems that “accurate predictions” might be one answer. But predictions of what? Would we need to aggregate our values to decide what we want to predict?
The phrase in Robin’s post was “join together to believe truth”, so perhaps the desired result is more true beliefs (in more heads)? Did you envision making things that are more likely to be true more visible, so that they become defaults? In other words, caching the results of truth-seeking so they can be easily shared by more people?
“How can we join together to believe truth?”
Yes!
I am being deluged here on LW by all the posts and comments. Spending so much time in front of the screen does not seem sensible or rational.
What to do?
Spending so much time in front of the screen does not seem sensible or rational.
If you didn’t have a better plan for making the world a better place already, then spending time thinking about how to improve the general level of optimisation for good things seems like one of the more productive ways to waste time on the Internet.
Several weeks ago I unsuccessfully resolved to start doing community service every weekend.
If no one person has a good grasp of all the material, then there will be insights that are missed. Science in our era is already dominated by dumb specialists who know everything about nothing. EY’s work has been so good precisely because he took the effort to understand so many different subjects. I’ll bet at long odds that a prediction market containing an expert on evo-psych, an expert on each of five narrow AI specialisms, an expert on quantum mechanics, an expert on human biases, an expert on ethics and an expert on mathematical logic would not even have produced FAI as an idea to be bet upon.
Combining two or even three particular topics can be the thing that you specialize in.
I was just about to respond by asking if you would advocate a website in which the beliefs of the members are aggregated based on their reliability; then I remembered: prediction markets.
I’m guessing you’re not pushing a real prediction market due to legal issues, but why not create one that uses token points instead of real money?
My first thought was slightly different: have testable predictions, as in a market, but the system treats each persons’ likelihood ratios as evidence (as well as the tags for the prediction, to account for each person’s area of expertise)
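That likelihood-ratio idea can be made concrete: treat each person's stated likelihood ratio as independent evidence and combine them in log-odds space. A toy sketch under a strong (and usually false) independence assumption, with invented numbers:

```python
import math

def combine(prior, likelihood_ratios):
    """Update prior odds by a sequence of independent likelihood ratios,
    returning the posterior probability."""
    log_odds = math.log(prior / (1 - prior))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)
    odds = math.exp(log_odds)
    return odds / (1 + odds)

# Three forecasters each report how strongly their evidence favors "yes".
# A ratio > 1 favors the claim; < 1 counts against it.
posterior = combine(prior=0.5, likelihood_ratios=[3.0, 2.0, 0.5])
print(round(posterior, 2))  # → 0.75
```

In practice people's evidence overlaps, so naive multiplication double-counts; weighting by each person's track record per topic tag, as suggested above, is one way to dampen that.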
It seems to me that the real issue still is a supply of testable problems.
It does take work to create judgeable claims, but there are other real issues as well.
Foresight Exchange
Laws can, and in this case should, be changed.
Aren’t we supposed to be the martial rationalists of humanity? Aren’t we the ones being paid (I wish) to protect the neighborhoods from the marauding apologists? Aren’t we the ones to go to the wild places and battle dragons?
Division of labour
I would guess that martial arts are so frequently used as a metaphor for things like rationality because their value is in the meta-skills learned by becoming good at them. Someone who becomes a competent martial artist in the modern world is:
Patient enough to practice things they’re not good at. Many techniques in effective martial arts require some counter-intuitive use of body mechanics that takes non-trivial practice to get down, and involve a lot of failure before you achieve success. This is also true of a variety of other tasks.
Possessing the fine balance of humility and confidence required to learn skills from other people. Generally if you’re going to get anywhere in martial arts, you’re not going to derive it from first principles. This is true of most human knowledge domains. Learning to be a student or apprentice is valuable, as is learning to respect the opinions of others when they demonstrate their competence.
Practiced in remaining calm and thinking strategically under pressure. If one is taught to competently handle a high-stress situation such as a physical fight, one can make decisions quickly and confidently even when stressed. This skill is useful for reasons I hope I don’t have to go into depth on.
Able to engage mirror neurons to understand and reason about the nonverbal behavior of other humans, and somewhat understand their intentions and strategies. This is useful in a fight and taught by many martial arts, but extremely useful in other contexts, not the least of which being negotiation with semi-cooperative individuals.
Probably pretty physically fit. It’s a decent whole-body exercise regimen, and there are numerous benefits to exercising frequently and keeping in good shape. It is probably not the most efficient exercise regimen out there by a long shot, but it may be one that is intrinsically fun to do for a lot of people, and thus it’s likely that they’ll stick with it.
Almost incidentally, reasonably capable of defending oneself in one of the few instances where civilized behavior temporarily breaks down (An argument with a seemingly reasonable person who quickly becomes unreasonable, perhaps alcohol is involved? I don’t know. Fights are low-stakes and uncommon these days but they still happen). This is kind of a weird edge case in a modern society but might non-trivially prevent injury or gain you status when it comes up.
Note that there are a lot of vectors by which one can gain these meta-skills. While there are a bunch of martial arts enthusiasts out there who would probably claim that martial arts have the exclusive ability to grant you one or more of these, I really doubt that’s the case. However, martial arts get a pretty good amount of coverage in real and fictional cultural reference frames that we can be reasonably confident most people are familiar with, and it’s not a bad example of a holistic activity that can hone a lot of these meta-skills.
It’s also worth noting that while the skills involved in interacting with a society of people you trust and want to work with are often different from the skills involved in becoming a competent individual, many of the latter can be helpful in the former. I would much rather be on a team with a bunch of people who understand the meta-skill of staying calm under pressure, or the meta-skill of making their beliefs pay rent, than be on a team with a bunch of people who don’t. Aggregated individual prowess isn’t the only factor for group success, and it may not even be the most important one, but it certainly doesn’t hurt.
Why do these prediction markets have to be subsidized? In the U.S., online prediction markets are currently considered internet gambling and are hampered. Is there a reason legal, laissez-faire prediction markets couldn’t take hold?
Prediction markets are currently immature and controversial, and so might have trouble bootstrapping.
Their legality is problematic. (The IEM had to get a special no-action exemption from the CFTC to run.)
Prediction markets like Intrade currently are structured in ways bad for financial return. (IIRC, the issue is that Intrade offers a very low or no interest rate on deposited funds—the float is a source of profit for it.)
Long-run prediction markets, like those on many possible scientific or academic questions, are not financially viable (see ‘opportunity cost’), while sports and gambling bets are inherently short-term, taking no more than a year.
A succession of short-term markets might help, but then you have the problem that with the natural low prices on ‘success’ shares, it’s hard to make any profit. (eg. imagine a ‘cold fusion in 2010’ market—it’d be at a penny or two. Suddenly shares double due to a new paper! But because it’s so lightly traded, you only made a dime on your prescient long position.)
(Did I miss any?)
Hence, subsidies. Peter McCluskey ran a market-maker bot (OB coverage). Some traders discuss bots; note that they say it’s hard to arbitrage Intrade & Betfair in part due to low volume and fees and costs (McCluskey’s page mentions that Intrade “agreed not to charge any trading or expiry fees”.)
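For context, the standard way to subsidize a thin market is Hanson’s logarithmic market scoring rule (LMSR), which is roughly what an automated market-maker bot like McCluskey’s implements. A minimal sketch, assuming a two-outcome claim; parameter names are mine:

```python
import math

class LMSRMarketMaker:
    """Minimal sketch of a logarithmic market scoring rule market maker.

    The sponsor's worst-case loss is bounded by b * ln(num_outcomes),
    which is the 'subsidy' that pays traders for revealing information
    on otherwise thinly traded questions.
    """
    def __init__(self, num_outcomes, b=100.0):
        self.b = b
        self.q = [0.0] * num_outcomes  # outstanding shares per outcome

    def _cost(self, q):
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))

    def price(self, i):
        """Current marginal price of outcome i (an implied probability)."""
        denom = sum(math.exp(x / self.b) for x in self.q)
        return math.exp(self.q[i] / self.b) / denom

    def buy(self, i, shares):
        """Buy `shares` of outcome i; returns the cost charged."""
        new_q = list(self.q)
        new_q[i] += shares
        cost = self._cost(new_q) - self._cost(self.q)
        self.q = new_q
        return cost

mm = LMSRMarketMaker(2)   # starts at an implied probability of 50/50
cost = mm.buy(0, 50)      # buying pushes the price of outcome 0 up
```

Because the market maker always quotes a price, a lone trader with private information can profit immediately, which addresses the low-volume complaint above at the cost of the sponsor’s bounded subsidy.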
Thanks. I’ve been curious about the interest question for a while.
McCluskey’s followup: http://www.bayesianinvestor.com/blog/index.php/2008/11/13/automated-market-maker-results/
Googling some more, relevant links are http://www.overcomingbias.com/2007/11/intrade-fee-str.html and http://bb.intrade.com/intradeForum/posts/list/4471.page
Probably could find more examples of how Intrade is not an optimal prediction market using this tag: http://www.overcomingbias.com/tag/prediction-markets
A very good point!
But I can’t easily explain why it is a good point without violating the ban on mention of AI.
This observation doesn’t invalidate Less Wrong. Someone still has to study these things. But the emphasis on individualism here can diminish awareness of the big picture.
I think it was just brainstorming based on Eliezer’s post; he also wrote about the sanity water line, which I see your rational society approach fitting in with. Maybe a dojo is a bit extreme, but I think a zendo isn’t implausible, with people working on rationality koans. Or maybe rationality group therapy, where people can express potential irrationality that they can receive non-judgemental feedback on. Grassroots bottom up approaches could work with larger top down approaches to create the rational society, or whatever word Yvain might find less taboo :)
If one goes off the notions of others without coming to conclusions for oneself, one is just as blind as an evangelical Christian. True insight can only come from within. That’s why reason is of premium importance.
It is important to note the difference between insight and belief, however; for insight is based on rationality and logic, whereas belief is based on primal emotions and instincts.
Evangelical Christians sometimes form their own insights and conclusions, even about things with religious significance.
I should also add that I think group rationality techniques are important. We’ve already seen that being a good group rationalist means acting differently than just trying to be individually as accurate as possible. [in particular, you should not be swayed by what the rest of the group thinks].
No, you should still be swayed, you just shouldn’t represent the swaying as being independent analysis. You also should take into account that the opinions of other group members may have been caused by swaying rather than independent analysis, but that was already true in the individual accuracy case.
Right, I see. For a group of perfect rationalists, yes, I agree, at least to an extent.
The problem is that this is very hard to do in reality. If I have 15 commenters down/upvote a post or comment I make on LW, how do I know to what extent they’re providing 15 distinct opinions vs. 1 opinion followed by 14 swingers? How do I estimate the swinginess coefficient? It seems that group rationality is maximized if individuals state their own opinions on a particular question independently of the group, and only update once a really overwhelming consensus is reached, some time after that particular discussion is over. The group’s decision is then the average of n independent opinions. This would make for a very clever group iff each individual is quite clever.
I should emphasize: this will mean that the group (overall) displays smart behavior, but that the individuals do worse than they otherwise would.
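The independence point above can be illustrated with a small simulation; the `sway` model here is a toy assumption of mine, not anything from the thread:

```python
import random

def group_error(n_members, sway, n_trials=2000, noise=1.0, seed=0):
    """Mean absolute error of a group's averaged estimate of a quantity
    whose true value is 0.

    `sway` is the fraction of members who echo the first speaker
    instead of reporting an independent noisy estimate.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        first = rng.gauss(0.0, noise)
        opinions = [first]
        for _ in range(1, n_members):
            if rng.random() < sway:
                opinions.append(first)            # swayed: just an echo
            else:
                opinions.append(rng.gauss(0.0, noise))
        total += abs(sum(opinions) / n_members)
    return total / n_trials

independent = group_error(15, sway=0.0)
herding = group_error(15, sway=0.9)   # mostly echoes of one opinion
```

With fully independent opinions the averaged error shrinks roughly as 1/√n; with heavy swaying the “15 opinions” carry little more information than one, so the group estimate is much noisier.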
Also, how relevant is Robin’s paper on Aumann’s agreement theorem for wannabe/imperfect bayesians to this debate? It seems that he might (under certain assumptions) have proved the opposite of what I’m claiming here.
Roko, when you run into a case of “group win / individual loss” on epistemic rationality you should consider that a Can’t Happen, like violating conservation of momentum or something.
In this case, you need to communicate one kind of information (likelihood ratios) and update on the product of those likelihood ratios, rather than trying to communicate the final belief. But the Can’t Happen is a general rule.
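The rule above can be sketched concretely: the group should multiply its likelihood ratios rather than average its final beliefs. A toy example, assuming the three members’ observations really are independent evidence:

```python
def update_on_likelihood_ratios(prior, ratios):
    """Pool group evidence by multiplying likelihood ratios (valid when
    the pieces of evidence are independent given the hypothesis)."""
    odds = prior / (1.0 - prior)
    for r in ratios:
        odds *= r
    return odds / (1.0 + odds)

# Three members each saw independent 3:1 evidence for H.
# Each alone would believe 0.75; the group should believe more.
posterior = update_on_likelihood_ratios(0.5, [3.0, 3.0, 3.0])
naive_average = (0.75 + 0.75 + 0.75) / 3   # averaging beliefs stays at 0.75
```

Averaging final beliefs throws away the fact that the three observations reinforce each other, which is why communicating likelihood ratios dominates communicating posteriors here.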
I’d like to see a proof if it’s that fundamental. Is it theorem x.xx in one of the Aumann agreement papers?
Really!? No exceptions?
This doesn’t feel right. If it is right, it sounds important. Please could you elaborate?
Not being swayed means not taking advantage of your group membership.
Silly robot, Less Wrong is for kids!