This is an excellent criticism of the rationality movement / community. We should have more self-aware analysis like this! Needless to say, I agree entirely with the implied point (that the Visitor’s arguments, being not actually sourced from a powerful galactic civilization and possibly malicious even if they were, do not hold up, and that his attempts to exhort his human interlocutor to action could, without modification, come from almost any movement or ideology, regardless of its actual value).
The implied follow-up question, if I am reading correctly, is: how, then, do we differentiate ourselves? How do we (second) convince others, and (first) establish for ourselves, that we’re different? What can we offer to prospective joiners that cannot be offered by other movements (i.e., what can we offer that constitutes an unfalsifiable signal that we are the “true path” to the “good ending”, so to speak)?
I felt like the OP was already quite long enough, and don’t have time now to write the full followup post that this question deserves, but in brief, the thrust would be that any rationalist organization deserving of the name would carefully choose its norms, structure and bylaws to reflect those of the most successful existing organizations (empiricism!), with care taken to exclude the aspects of those organizations that are inimical to group or individual rationality. Thus, even if stoning apostates has proven to be an empirically useful organizational strategy from the perspective of growth, it’s probably not something we want to emulate.
I’m not sure if we can actually offer an unfalsifiable signal that we are on the “true path”. I’m not sure if we even necessarily need or want to do that. In order to justify the existence of the “Don’t Shoot Yourself in the Foot Club”, you just need to demonstrate that not shooting yourself in the foot is better than the alternative, and I think we can do at least that, metaphorically.
Also, I actually suspect that any formal structure at all would probably be, on net, more of a good thing than a bad thing, in terms of growing the movement.
> In order to justify the existence of the “Don’t Shoot Yourself in the Foot Club”, you just need to demonstrate that not shooting yourself in the foot is better than the alternative, and I think we can do at least that, metaphorically.
Well, now, this is not quite right, I think, or rather it’s incomplete. What’s missing (and what I suspect you were assuming—but should definitely be stated explicitly!) is that the members of the “Don’t Shoot Yourself in the Foot Club” should, in actual fact, successfully avoid shooting themselves in the foot (or, to state it in a less binary fashion: should shoot themselves in the foot measurably less than non-members).
This may seem like an obvious point, but in fact there is nothing surprising about a “Do X Club” that doesn’t do X. After all, having the power of telekinesis is clearly better than not having the power of telekinesis, but I do not think that this fact suffices to justify the existence of a “Have the Power of Telekinesis Club”!
Again, this ought to be stated explicitly, precisely because “does your ‘Do X Club’ actually do X” is an empirical, and very much open, question.
I concur with this wholeheartedly.
> Also, I actually suspect that any formal structure at all would probably be, on net, more of a good thing than a bad thing, in terms of growing the movement.
I wonder about this. Is the average Christian more “Christian” than the average non-Christian? (Do they do good works for strangers, love and forgive their enemies, and live lives of poverty and service, at rates significantly above the population average?) If not, does that really affect their ability to grow? Has it really affected their ability to grow, historically?
Is that who we want to emulate? Christianity? (Or, perhaps, the largest single Christian organization—the Catholic Church? Of all the times to pick the Catholic Church as a role model, now seems to be an unusually bad time to do so…)
And is “growth” our only, or our most important, goal?
My point was merely that you can found a club around an aspiration rather than an accomplishment. It’s better to have the accomplishment, of course, but not necessary.
But let’s follow your reasoning—and the analogy with Christianity—a bit further.
Christianity was founded around an aspiration: to be more Christian (defined as “doing good works for strangers, loving and forgiving their enemies, and living lives of poverty and service”) than people who aren’t members.
It survived, and grew—very successfully.
Let us take the implication in your question for granted, and stipulate that the average Christian today is not more Christian than the average non-Christian.
The aspiration around which Christianity was founded would seem not to have been attained.
By analogy, you propose to found a club around an aspiration: to be more rational than people who aren’t members of the club.
It seems plausible that, having done this, our club can successfully survive and grow.
However—assuming the pattern holds (and we have no particular reason to think that it won’t)—members of our club will not be more rational than non-members. Our aspiration will not have been attained.
In short, this reasoning seems to endorse the following trade-off: survive and grow by sacrificing your goals.
Is that what you really want?
I think you might be overlooking the widespread cultural effects of Christian memes. When I had a similar discussion with a friend I argued “imagine a society in which the 12 Virtues had the place the 10 Commandments (or maybe the Beatitudes) do in ours”.
Not everyone, or even most people, actually _follow_ the 10 Commandments, and it is debatable whether Christians follow them any more frequently than non-Christians. But if you compare ours to a society that had basically _never heard_ of the 10 Commandments, I think it is hard to imagine that the other society would have more Commandment-followers.
Christian memes are _absurdly_ pervasive in the Western canon, to the point where historically even secularists conducted their intellectual discourse in terms of Christian ideas.
Consider a world in which children’s literature is filled with rationalist ideas, and Good Moral Teaching is all about being a good rationalist, and even anti-rationalists have to define themselves on the terms of the rationalists in order to be an effective counter-movement, and most people _know_ they’re supposed to Make Their Beliefs Pay Rent and Destroy What Can Be Destroyed By The Truth, even if they don’t bother to actually do so most of the time.
I would expect this world to actually be more rational, on net, than our own. In fact, I think that if such a world is _not_ more rational, then it is a damning indictment of group rationalism in general, and possibly evidence that the whole affair is a waste of energy.
I have a guess as to how this would actually evolve.
While the median Christian is not particularly Christian, there probably are a good number of pretty excellent Christians, whose motivation for being thus is their commitment to the ideals that they profess. So it’s possible—even likely—that Christianity actually makes the world a little bit more “in the image of Christ” on the margin.
If you have a billion Christians, the number of “actually pretty good” Christians is likely to be pretty high.
Right now we probably have barely thousands of Rationalists who would identify as such. An organized attempt at increasing that number, with a formal aspiration to be better rationalists, would increase the number of “actually pretty good” rationalists, although the median rationalist might just be somebody who read 4% of the Sequences and went to two meetups. But that would still be a win.
Hmm. Well, certainly the full follow-up would be a tremendously valuable thing to have, so whenever you have the time to write it, I definitely think that you should!
But for now:
> … any rationalist organization deserving of the name would carefully choose its norms, structure and bylaws to reflect those of the most successful existing organizations (empiricism!), with care taken to exclude the aspects of those organizations that are inimical to group or individual rationality.
Hm, indeed. Obvious follow-up questions:
By “rationality”, here, do you mean epistemic or instrumental rationality? Or both? (And if “both”, which is prioritized if they conflict? Or, must aspects of successful organizations that are inimical to either epistemic or instrumental rationality, and either group or individual versions of each, all be excluded?)
What should an aspiring rationalist organization do if, upon empirical investigation, the norms, structure, and bylaws shared by all the most successful existing organizations turn out to be inimical to group or individual rationality?
Regarding both follow-up questions, I have two answers:
Answer 1: I don’t intend for this to be a dodge, but I don’t think it really matters what I think. I don’t think it’s practical to construct “the perfect organization” in our imagination and then anticipate that its perfection will be realized.
I think what a rationality organization looks like in practice is a small group of essentially like-minded people creating a Schelling point by forming the initial structure, after which the organization evolves in ways that are not necessarily predictable, and that reflect the will of the people who actually have the energy to put into the thing.
What’s interesting is that when I say it that way, I realize that it sounds like a recipe for disaster. But also note that essentially no other organization on Earth has been formed in any other way.
Answer 2: I personally would create separate organizational branches for epistemic and instrumental focus, such that both could use the resources of the other, but neither would be constrained by the rules of the other. Each branch could adopt whatever policies are most suited to it. Think of the two houses of a congress. Either branch could propose policies to govern the whole organization, which could be accepted or vetoed by the other branch. There’s probably also a role for something like an elected executive branch, but at this point I am grasping well beyond my domain of expertise.
> What’s interesting is that when I say it that way, I realize that it sounds like a recipe for disaster. But also note that essentially no other organization on Earth has been formed in any other way.
I’m not so sure about that. Perhaps you meant “no other [freestanding] organization on Earth has been formed [by people without access to massive amounts of resources] in any other way”.
To elaborate on the distinction, I can point to organizations like NASA, or the Manhattan Project. They were created, from the beginning, as large groups, with pre-planned bureaucratic structures and formal lines of authority. While there was a certain level of organic growth, it’s not like these things started in a garage and grew outward from there. Similarly, in private industry, when IBM embarked on its OS/360 project, or Microsoft embarked on Internet Explorer, these were not small efforts started by a group of insurgents. Rather, they were responses to concrete opportunities/threats (a new mainframe, Netscape) that were identified by the leadership of the parent organization, who then mobilized the appropriate resources.
I think this matters beyond mere pedantry. You’ve identified one way to spread rationalism—bottom up, by establishing a small group of rationalists, who then spread their doctrine outwards. I’m saying there’s another way: identify leaders, convince them that rationality is a good thing to focus on, and then have them mobilize the appropriate resources to spread rationality. If you could convince Bill Gates, Jeff Bezos, Warren Buffett and Carlos Slim that they should fund rationality with the full force of their collective fortune, then that would potentially do more in a year to spread rationalism than a small organization organically growing for a decade.
Of course, “go big or go home” has its own failure modes, but it’s not actually self-evident that it’s more risky than starting small and spreading outwards. Moreover, the really successful small groups employed a hybrid strategy. They started out as a small group, until they could amass enough resources and prestige to convince influential decision-makers that their cause was worth supporting. The canonical (pun fully intended) example is the Catholic Church, of course. It started as a small, often persecuted group of followers of a particular religious prophet, indistinguishable from the other Jewish spin-offs. However, through steady proselytizing, it grew and converted the aristocracy of the Roman empire. From the conversion of Constantine onwards, it enjoyed imperial favor, eventually became the state religion, and spread rapidly wherever the Roman empire held sway.
I think MIRI also employed a hybrid strategy. I will say, it seems much easier to deploy a “go big or go home” approach after you’ve already created a minimum viable organization, rather than attempting to poach thinkfluencers without even having that groundwork in place.
> How do we (second) convince others, and (first) establish for ourselves, that we’re different? What can we offer to prospective joiners that cannot be offered by other movements (i.e., what can we offer that constitutes an unfalsifiable signal that we are the “true path” to the “good ending”, so to speak)?
I came to this article having just read one about Donald Trump’s response to the 9/11 attacks, which mentioned that Trump saw them from the window of his apartment. The WTC attacks happened at around 9 AM, the start of the standard workday; but he had decided to stay in his apartment later than usual to catch a TV interview with Jack Welch, the former CEO of General Electric.
I thought that was interesting. Welch is well-known in the business world, and at least was once well-regarded. I have one of his books, although I haven’t read it yet.
Now, the problem of how to convince people to pay attention to a memeplex is a problem Less Wrong has. Jack Welch, not so much. I saw his book at a thrift store, had some idea of who he was, and figured it’d be worthwhile to buy it. Donald Trump heard that he’d be on TV, knew well (I assume) who he was, and figured it’d be worthwhile to watch the interview. We aren’t on TV.
Why not?
Maybe it’s because we aren’t Jack Welch.
We’ve all read our Aristotle, right? Our marketers come up with plenty of logos and pathos. Ethos, not so much. But it worked for Jack Welch...
There’s an important difference between the alien’s initial sales pitch and the problem of recruiting people to Less Wrong. The alien is a representative of an advanced civilization, offering a manual for uplifting the human race—so there’s a solution to widely advertising it that will only work if the manual does: simply distribute the manual to a few hundred people around the world who are highly motivated to do well in life. Once they’ve learned it, applied its contents, and become wildly successful CEOs of General Electric or whatever, some of them will (almost certainly) make it known that their success is due to their mastery of the contents of a book...
But the book doesn’t actually exist, we aren’t hot-shit enough to recruit through ethos (why not? could it be that we’re failing? could it be that we’re failing so badly that our startups try to write their own payroll software?), and our sales pitches are pretty bad. I noticed so many of our quality people leaving, and so much lack of interest in *actually winning*, that I stopped paying attention myself—I only saw this post because it was linked on Twitter.
Before asking what LW can offer to prospective joiners that can’t be offered by other movements, ask if it *has* anything like that. I don’t think it does, and I don’t think it’s in a position to get there.
Indeed.
The next things to ask, then, are:
1. Is it possible to construct a movement which accomplishes our goals, but doesn’t have the failings you describe?
2. Suppose I am an aspiring rationalist, and would like to do whatever I can to help bring about the existence, and ensure the success, of such a movement. What ought I to do?
(Note that the phrasing of #2—namely, the fact that it’s phrased as a question of individual action—is absolutely critical. It does no good whatsoever to ask what “we” should do. “We” cannot decide to do anything; individuals choose, and individuals act.)
P.S.: It might be necessary also to preface these two questions with a question #0: “What actually are our goals?” (The OP does not quite make it clear—which may or may not be intended.)
These are good questions.
0. Are “we” the sort of thing that can have goals? It looks to me like there are a lot of goals going around, and LW isn’t terribly likely to agree on One True Set of Goals, whether ultimate or proximate.
I think one of the neglected possible roles for LW is as a beacon—a (relatively) highly visible institution that draws in people like-minded enough that semirandom interactions are more likely to be productive than semirandom interactions in the ‘hub world’, and allows them to find people sufficiently like-minded that they can then go off and do their own thing, while maintaining a link to LW itself, if only to search it for potential new members of their own thing.
My impression of internet communities in general is that they tend to be like this, and I don’t see any reason to expect LW to be different. Take Newgrounds, another site formed explicitly around productive endeavors (which has the desirable (for my purposes here) property that I spent my middle school years on it): it spawned all sorts of informal friend groups and formal satellite forums, each with its own sort of productive endeavor it was interested in. There was an entire ecosystem of satellite forums (and AIM/MSN group chats, which sometimes spawned satellite forums), from prolific NG forum posters realizing they had enough clout to start their own forum so why not, to forums for people interested in operating within the mainstream tradition of American animation, to a vast proliferation of forums for ‘spammers’ who were interested in playing with NG itself as a medium, to forums for people who were interested in making one specific form of movie—wacky music videos, video game sprite cartoons, whatever. And any given user could be in multiple of these groups, depending on their interests—I was active on at least one forum in each of the categories I’ve listed.
(As an aside: I say ‘spammers’ because that’s what they were called, but later on I developed enough interest in the art world to realize that there’s really no difference between what we did and what they’re doing. (The ‘art game’ people would do well to recognize this—they’re just trolls, but trolling is a art, so what the hell.) There were also ‘anti-spam’ forums, but I brought some of them around.)
1. As for classical LW goals, the AI problem does seem to have benefited quite a bit from ethos arguments. I’m not sure if “our goals” is even the type of noun phrase that *can* have semantic content, but cultivating general quality seems like a fairly broad goal. A movement that wants to gain appeal in the ways I’ve outlined will want its members to be visibly successful at instrumental rationality, and to be fine upstanding citizens and so on.
2. I don’t think I’m smarter than Ben Franklin, so my advice for now would be to just do what he did. At a higher level: study successful people with well-known biographies and see if there’s anything that can be abstracted out. I notice (because Athrelon pointed it out a while ago) that Ben Franklin, C.S. Lewis, Tolkien, Thiel, and Musk have one thing in common: the benefit of a secret society or something like it—the Junto, the Inklings, or the PayPal Mafia.
I think there is something like a Platonic “ultimate textbook of human rationality” that may be written in the future, but we don’t actually know its contents. That’s why the visitor can’t give us the book. We have a dual problem: not only the challenge of spreading the ideas, but actually pinning down what the ideas are in the first place.
Actually, I think “pinning down” has entirely the wrong connotations, because human rationality seems more like a living, breathing process than a list of maxims chiseled in stone, and it is to a degree culturally dependent.
I will say that I don’t think you need to answer #0 concretely before you set out. We can guess at the contents of the Platonic rationality textbook, and then iterate as we converge upon it.