Yes, it would be silly to think of ourselves as isolated survivalists in a society where so many people are signed up for cryonics, where Many-Worlds was seen as retrospectively obvious as soon as it was proposed, and no one can be elected to public office if they openly admit to believing in God. But let us be realistic about which Earth we actually live in.
I too am greatly interested in group mechanisms of rationality—though I admit I put more emphasis on individuals; I suspect you can build more interesting systems out of smarter bricks. The obstacles are in many ways the same: testing the group, incentivizing the people in it. In most cases if you can test a group you can test an individual and vice versa.
But any group mechanism of that sort will have the character of a band of survivalists getting together to grow carrots. Prediction markets are lonely outposts of light in a world that isn’t so much “gone dark” as having never been illuminated to begin with; and the Policy Analysis Markets were burned by a horde of outraged barbarians.
We have always been in the Post-Apocalyptic Rationalist Environment, where even scientists and academics are doing it wrong and Dark Side Epistemology howls through the street; I don’t even angst about this, I just take it for granted. Any proposals for getting a civilization started need to take into account that it doesn’t already exist.
Sounds like you do think of yourself as an isolated survivalist in a world of aliens with which you cannot profitably coordinate. Let us know if you find those more interesting systems you suspect can be built from smarter bricks.
It’s pretty hard to be isolated in a world of six billion people. The key questions are rather the probability of coordinating with any randomly selected person on a rationalist topic of fixed difficulty, and the total size of the community available to support some number of institutions.
To put it bluntly, if you built the ideal rationalist institution that requires one million supporters, you’d be in trouble because the 99.98th percentile of rationality is not adequate to support it (and also such rationalists may have other demands on their time).
But if you can build institutions that grow starting from small groups even in a not-previously-friendly environment, or upgrade rationalists starting from the 98th percentile to what we would currently regard as much higher levels, then odds look better for such institutions.
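As a back-of-the-envelope check on those percentile figures, here is a minimal sketch assuming the six-billion world population mentioned above and the hypothetical one-million-supporter institution from the example:

```python
# Rough arithmetic behind the percentile argument above (illustrative only).
world_population = 6_000_000_000   # figure used earlier in the thread
supporters_needed = 1_000_000      # hypothetical institution from the example

def pool_size(percentile):
    """People at or above the given rationality percentile."""
    return world_population * (1 - percentile / 100)

for p in (98.0, 99.98):
    pool = pool_size(p)
    print(f"{p}th percentile: ~{pool:,.0f} people, "
          f"{pool / supporters_needed:.1f}x the supporters needed")

# 98th percentile    -> ~120,000,000 people: a large pool, if they can be upgraded.
# 99.98th percentile -> ~1,200,000 people: barely above one million, before
#                       accounting for the other demands on those people's time.
```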
We both want to live in a friendly world with lots of high-grade rationalists and excellent institutions with good tests and good incentives, but I don’t think I already live there.
Even in the most civilized civilizations, barbarity takes place on a regular basis. There are some homicides in dark alleys in the safest countries on earth, and there are bankruptcies, poverty, and layoffs even in the richest countries.
In the same way, we live in a flawed society of reason, which has been growing and improving in fits and starts since the scientific revolution. We may be civilized in the arena of reason in the same way you could call Northern Europe in the 900s civilized in the arena of personal security: there are rules that nearly everyone knows and that most obey to some extent, but they are routinely disrespected, and the only thing that makes people really take heed is the theater of enforcement, whether that’s legally-sanctioned violence against notorious bandits or a dressing-down of notorious sophists.
Right now we are only barely scraping together a culture of rationality; it may have a shaky foundation and many dumber bricks, but it seems a bit much to say we don’t have one.
Let us distinguish “truth-seekers”, people who respect and want truth, from “rationalists”, people who personally know how to believe truth. We can build better institutions that produce truth if only we have enough support from truth-seekers; we don’t actually need many rationalists. And having rationalists without good institutions may not produce much more shared accessible truth.
I’m not sure I can let you make that distinction without some more justification.
Most people think they’re truth-seekers and honestly claim to be truth-seekers. But the very existence of biases shows that thinking you’re a truth-seeker doesn’t make it so. Ask a hundred doctors, and they’ll all (without consciously lying!) say they’re looking for the truth about what really will help or hurt their patients. But give them your spiel about the flaws in the health system, and in the course of what they consider seeking the truth, they’ll dismiss your objections in a way you consider unfair. Build an institution that confirms your results, and they’ll dismiss the institution as biased or flawed or “silly”. These doctors are not liars or enemies of truth or anything. They’re normal people whose search for the truth is being hijacked in ways they can’t control.
The solution: turn them into rationalists. They don’t have to be black belt rationalists who can derive Bayes’ Theorem in their sleep, but they have to be rationalist enough that their natural good intentions towards truth-seeking correspond to actual truth-seeking and allow you to build your institutions without interference.
“The solution: turn them into rationalists.”
You don’t say how to accomplish this. Would it require (or at least benefit greatly from) institutional change?
I had in mind that you might convince someone abstractly to support, e.g., prediction markets because they promote truth, and then they would accept the results of such markets even if they disagreed with their intuitions. They don’t have to know how to bet well in such markets to accept that they are a better truth-seeking institution. But yes, being a truth-seeker can be very different from believing that you are one.
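For readers who haven’t seen how a prediction market turns individual bets into a shared estimate, here is a minimal sketch of one standard mechanism, a logarithmic market scoring rule; the class name, liquidity parameter, and trades are illustrative assumptions, not a description of any particular market mentioned above:

```python
import math

# Minimal sketch of a logarithmic market scoring rule (LMSR) market maker for a
# binary question. Traders buy or sell shares; the implied price is the market's
# current probability estimate, which observers can read off without themselves
# knowing how to bet well.

class LMSRMarket:
    def __init__(self, b=100.0):
        self.b = b           # liquidity parameter: larger b means prices move more slowly
        self.q = [0.0, 0.0]  # outstanding shares for outcomes [YES, NO]

    def cost(self, q):
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))

    def price(self, outcome):
        """Current implied probability of the given outcome."""
        den = sum(math.exp(x / self.b) for x in self.q)
        return math.exp(self.q[outcome] / self.b) / den

    def buy(self, outcome, shares):
        """Buy `shares` of an outcome; returns the cost charged to the trader."""
        new_q = list(self.q)
        new_q[outcome] += shares
        charge = self.cost(new_q) - self.cost(self.q)
        self.q = new_q
        return charge

market = LMSRMarket()
print(f"initial P(yes) = {market.price(0):.2f}")   # 0.50
market.buy(0, 50)                                   # an optimistic trader buys YES shares
print(f"after buying:  P(yes) = {market.price(0):.2f}")  # ~0.62
```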
Btw, I only just discovered the “inbox” that lets me find responses to my comments.
This sounds like you’re postulating people who have good taste in rationalist institutions without having good taste in rationality. Or you’re postulating that it’s easy to push on the former quantity without pushing on the latter. How likely is this really? Why wouldn’t any such effort be easily hijacked by institutions that look good to non-rationalists?
Eliezer, to the extent that any epistemic progress has been made at all, was it not ever thus?
To give one example: the scientific method is an incredibly powerful tool for generating knowledge, and has been very widely accepted as such for the past two centuries.
But even a cursory reading of the history of science reveals that scientists themselves, despite having great taste in rationalist institutions, often had terrible taste in personal rationality. They were frequently petty, biased, determined to believe their own theories regardless of evidence, defamatory and aggressive towards rival theorists, etc.
Ultimately, their taste in rationalist institutions coexisted with a frequent lack of taste in personal rationality (certainly, a lack of Eliezer-level taste in personal rationality). It would have been better, no doubt, if they had had both tastes. But they didn’t, and in the end it wasn’t necessary that they did.
I would also make some other points:
1. People tend to have stronger emotive attachments—and hence stronger biases—in relation to concrete issues (e.g. “is the theory I believe correct”) than epistemic institutions (e.g. “should we do an experiment to confirm the theory”). One reason is that such object-level issues tend to be more politicised. Another is that they tend to have a more direct, concrete impact on individual lives (N.B. the actual impact of epistemic institutions is probably much greater, but for triggering our biases, the appearance of direct action is more important; cf. thought experiments about sacrificing a single identifiable child to save faceless millions).
2. Even very object-level biased people can be convinced to follow the same institutional epistemic framework. After all, if they are convinced that the framework is a truth-productive one, they will believe it will ultimately vindicate their theory. I think this is a key reason why competing ideologies agree to free speech, why competing scientists agree to the scientific method, why (by analogy) competing companies agree to free trade, etc.
[The question of what happens when one person’s theory begins to lose out under the framework is a different one, but by that stage, if enough people are following the epistemic framework, opting out may be socially impossible (e.g. if a famous scientist said “my theory has been falsified by experiment, so I am abandoning the scientific method!”, they would be a laughing stock)]
3. I really worry that “everyone on Earth is irrational, apart from me and my mates” is an incredibly gratifying and tempting position to hold. The romance of the lone point of light in an ocean of darkness! The drama of leading the fight to begin civilisation itself! The thrill of the hordes of Dark Side Epistemologists, surrounding the besieged outpost of reason! Who would not be tempted? I certainly am. But that is why I suspect.
I wonder whether a world “with lots of high-grade rationalists” is necessarily a friendly world. I doubt it. So I think rationality has to be tempered with something else. Let’s just call it “the milk of human kindness”.
I’m surprised to see this go negative.
Granted, Marshall didn’t explain his position in any detail. But his position is not indefensible, and I’m glad he’s willing to share it.
Downvote this heretic! I want to see him on −50 Karma, dammit! ;-0
Thanks Roko—nice with a bit of humour—btw, your wish is almost granted: I’ve lost 23 points in the space of 12 hours. Rationalists are fun people…
How did you manage that!? What I want to know is what were the 3 people who downvoted my humorous comment thinking? Maybe 3 out of all the 10 or so people still reading this thread actually thought I was serious and downvoted me for ingroup bias? Or maybe people think that humor is a no-no on LW? I can see how too much humor would dilute the debate. Writing humorous comments is fun, and probably good in small amounts, but if it caught on this could turn into a social space rather than an intellectual one…
It doesn’t take much—just one jerk systematically downvoting a page or two of your existing comments. I lost like 37 points in less than an hour that way a few days ago. We really need separate up/down counts, or better yet ups and downs per voter, so you can ignore systematic friend upvotes and foe downvotes.
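A minimal sketch of the per-voter bookkeeping being suggested here; the data model, names, and threshold are hypothetical illustrations, not how the site’s karma actually works:

```python
from collections import defaultdict

# Hypothetical per-voter karma ledger: record each voter's net votes on a target
# separately, so one account mass-downvoting someone's history can be detected
# and discounted. Purely a sketch, not the actual LW implementation.

class KarmaLedger:
    def __init__(self, suspicious_threshold=10):
        # votes[target][voter] = net votes that voter has cast on that target's comments
        self.votes = defaultdict(lambda: defaultdict(int))
        self.suspicious_threshold = suspicious_threshold

    def record_vote(self, voter, target, value):
        """value is +1 for an upvote, -1 for a downvote."""
        self.votes[target][voter] += value

    def raw_karma(self, target):
        return sum(self.votes[target].values())

    def adjusted_karma(self, target):
        """Ignore voters whose one-sided total on this target looks systematic."""
        return sum(v for v in self.votes[target].values()
                   if abs(v) < self.suspicious_threshold)

ledger = KarmaLedger()
for _ in range(37):                       # one account downvotes a page of comments
    ledger.record_vote("jerk", "marshall", -1)
ledger.record_vote("friendly_reader", "marshall", +1)
print(ledger.raw_karma("marshall"))       # -36
print(ledger.adjusted_karma("marshall"))  # +1: the systematic downvoter is ignored
```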
Are we already getting this behavior? I’ll have to start looking into voting patterns… Sigh.
Have you looked at Raph Levien’s work on attack resistant trust metrics?
Couldn’t it also be due to a change in the karma calculation rules, in order to, say, not take your own upvote into account in karma calculations? I remember that was mentioned, but don’t know if it was implemented in the meantime.
Edit: Well, it seems that it isn’t implemented yet, since posting this got me a karma point :)
If your picture of a high-grade rationalist is still this Spock crap, what are you doing here?
By principle of charity, I interpret Marshall as saying not that rationalists can’t be kind, but that rationalism alone doesn’t make you kind. Judging by my informal torture vs. pie experiments, I find this to be true. Rationality is necessary but not sufficient for a friendly world. We also need people who value the right kind of things. Rationality can help clarify and amplify morality, but it’s got to start from pre-rational sources. Until further research is done, I suggest making everyone watch a lot of Thundercats and seeing whether that helps :)
Of course, like with every use of the principle of charity, I might just be reading too much into a statement that really was stupid.
Your torture vs. pie experiment makes me think of another potential experiment. Is torture ever preferable to making, say, 3^^^3 people never have pie again? (In the sense of dust specks, the never eating pie is to be the entire consequence of the action. The potential pie utility is just gone, nothing else.)
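For anyone unfamiliar with the 3^^^3 notation borrowed from the dust-speck example, here is a small sketch of Knuth’s up-arrow notation; only tiny inputs are evaluated, since 3^^^3 itself is far beyond any computation:

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow notation: a ^^...^ b with n arrows.
    n=1 is ordinary exponentiation; each extra arrow iterates the previous level."""
    if n == 1:
        return a ** b
    if b == 1:
        return a
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3^3   = 27
print(up_arrow(3, 2, 2))  # 3^^2  = 3^3 = 27
print(up_arrow(3, 2, 3))  # 3^^3  = 3^27 = 7,625,597,484,987
# 3^^^3 = 3^^(3^^3): a power tower of ~7.6 trillion threes -- far too large to compute.
```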
By the principle of accuracy, I look up Marshall’s other comments: http://lesswrong.com/user/Marshall/
Marshall doesn’t have to be voted down for being wrong. He can be voted down for using an applause light and being vague.
“Marshall doesn’t have to be voted down for being wrong. He can be voted down for using an applause light and being vague.”
So can Eliezer_Yudkowsky.
“Marshall doesn’t have to be voted down for being wrong. He can be voted down for using an applause light and being vague.”
I have stared at this sentence for a long time, and I have wondered and wondered. I too have read my comments again. They are not vague. Not in the slightest. I think they belong to a slightly different reference set than the other postings and emphasize language as metaphor (which I think Eliezer calls appealing to applause lights).
I would call Eliezer’s quoted sentence brutal. Majestically brutal—and I would think it has contributed to the 23 karma points I lost in 12 hours of non-activity.
I have no wish to be a member of a club that will not have me. I have no wish to be a member of a club with royal commands.
“I have no wish to be a member of a club that will not have me.”
This is not the case. You’ve made over 30 comments; it’s trivial for an individual to swing your karma by large amounts. I note that your karma has made large swings in the ~30 minutes I’ve been considering this reply. If you want to discuss the group dynamics of LW then I have more to say, but I’m going to request (temporarily) that you don’t accuse me of groupthink or status seeking if you do.
Putting so much work into talking about these things isn’t the act of an isolated survivalist, though.