On Leverage Research’s plan for an optimal world
The plan currently revolves around using Connection Theory, a new psychological theory, to design “beneficial contagious ideologies”, the spread of which will lead to the existence of “an enormous number of actively and stably benevolent people”, who will then “coordinate their activities”, seek power, and then use their power to eliminate scarcity, disease, harmful governments, global catastrophic threats, etc.
That is not how the world works. Most positions of power are already occupied by people who have common sense, good will, and a sense of responsibility—or they have those traits, to the extent that human frailty manages to preserve them, amidst the unpredictability of life. The idea that a magic new theory of psychology will unlock human potential and create a new political majority of model citizens is a secular messianism with nothing to back it up.
I suggest that the people behind Leverage Research need to decide whether they are in the business of solving problems, or in the business of solving meta-problems. The real problems of the world are hard problems; they overwhelm even highly capable people who devote their lives to making a difference. Handwaving about meta topics like psychology and methodology can’t be expected to offer more than marginal assistance in any specific concrete domain.
After looking at the plan and the previous post, I’ve realized:
You are a parody of the SIAI, and I claim my five pounds.
I was suspecting a self-parody.
If not, it’s a heaping serving of poeslaw.
Skeptic: The idea that a magic new theory of psychology will unlock human potential and create a new political majority of model citizens is a secular messianism with nothing to back it up.
Leverage Researcher: Have you done the necessary reading? Our ideas are based on years of disjunctive lines of reasoning (see blog posts #343, #562, and #617 on why you are wrong).
Skeptic: But you have never studied psychology, why would I trust your reasoning on the topic?
Leverage Researcher: That is magical thinking about prestige. Prestige is not a good indicator of quality. We have written a bunch of blog posts about rationality and cognitive biases.
Skeptic: That’s great. But do you have any data that indicates that your ideas might actually be true?
Leverage Researcher: No. You’re entitled to arguments, but not (that particular) proof (blog post #898).
Skeptic: Okay. But I asked experts and they disagree with your arguments.
Leverage Researcher: You will soon learn that your smart friends and experts are not remotely close to the rationality standards of Leverage Research, and you will no longer think it anywhere near as plausible that their differing opinion is because they know some incredible secret knowledge you don’t.
Skeptic: Ummm, okay. To refine my estimate of your theory of psychology: what do you anticipate seeing if your ideas are right? Is there any way to update on evidence?
Leverage Researcher: No, I don’t know enough about psychology to be more specific about my expectations. We will know once we try it; please support us with money to do so.
Skeptic: I am not convinced.
Leverage Researcher: We call that motivated skepticism (see blog post #1355).
You would invoke this on someone asking for only specific evidence for your theory. It doesn’t make sense to invoke it against someone asking for ANY evidence.
You have to take the outside view here. When an outsider asks if you have evidence that AI will go FOOM, they are not talking about arguments, because in the opinion of a lot of people convincing arguments are not enough. That doesn’t imply that it is wrong to act on arguments, but that you are so detached from the reality of how people think that you don’t even get how ridiculous it sounds to an outsider who has not read the Sequences. Your comment, and the 11 upvotes it got, obviously show this.
The way outsiders see it is that a lot of things can sound very convincing and yet be completely wrong and that only empirical evidence or mathematical proofs can corroborate extraordinary predictions like those made by SI.
The wrong way to approach those people is with snide remarks about their lack of rationality.
Your reply makes me think that you interpreted the ‘you’ in “You would invoke …” as you—XiXiDu, so it sounded like Incorrect was accusing you of being hypocritical. I think they might have just meant ‘one’, though, which would make their reply less of a snide remark and more of an (attempted) helpful correction.
I’m guessing you didn’t read it that way because Incorrect was attempting to correct the way Leverage Researcher was using that argument, but you didn’t identify with the Leverage Researcher character in your dialogue. So when Incorrect posted that as a reply to you, you thought they were saying that you yourself are just as bad as your character. I’m guessing about what’s going on in two different people’s brains though, so I could easily be wrong.
And, in particular, you would invoke it when the proof demanded is proof that should not exist even given that the theory is correct.
The parent is misinformed trolling.
(Not) Leverage Researcher: Yes I have. What are you basing this pretentious diatribe on? You clearly know nothing about us and are pattern matching to the issues you rant on about incessantly with respect to SingInst.
(Not) Leverage Researcher: I haven’t seen you be convinced about anything substantial ever. “Motivated skepticism” would be a polite declaration—it assumes your ‘curiosity’ is not a rhetorical facade. (See blog post #1355 for the appropriate response to this kind of rubbish.)
I thought it was supposed to be funny. Curse you Poe’s Law!
It is supposed to be funny. Humor at the expense of Leverage Research (or, based on content, perhaps SIAI?). Humor is one of the most effective ways of undermining something’s credibility. Making jokes based on false premises is rather insidious. If you object to the blatant falsehoods, you may be portrayed as unable to take a joke.
I saw that coming and I knew it would be you. People are either trolling or part of a dark arts conspiracy.
The comment I wrote is the way I perceived lesswrong when I first came here. And I can tell from conversations with other people that they share that opinion.
A lot of your comments are incredibly arrogant and consist of dismissive grandstanding. And on request you rarely explain yourself; you merely point out that you don’t have to do so.
I wrote the comment so that SI can improve their public relations.
I can’t tell whether you guys are metatrolling each other or what.
Can they come to an Aumann agreement on the matter?
TWO OPINIONS ENTER! ONE OPINION LEAVES!
Only perfectly rational people are guaranteed to be able to do that. And you know that they are not both rational once accusations like “trolling” and “arrogance” start flying.
Sometimes an Aumann agreement just isn’t appropriate. This is one of those times.
Leverage Research is not SingInst. I have some reservations about their ideas, but they don’t overlap at all with the ones you’ve expressed here. Generalizing complaints you have against SingInst comes across as holding a grudge and misapplying it.
I don’t think that’s what happened. I think the aim was not to criticise LR at all, but to demonstrate to LW what LW’s answers to XiXiDu’s questions look like to him, when they’re portrayed as coming from somewhere else, somewhere LW is not particularly impressed by.
Really?
How confident were you that your comment would result in noticeable improvements to SI’s public relations?
People here have pretty much stopped replying to objections with “you should read the Sequences”. This suggests that pointing out socially clunky behaviour is worth at least trying, for all the outcries of the stung.
Mm. That’s fair.
Updated in favor of communication being a marginally less hopeless way of improving the world than I’d previously believed.
I am confident that people like Luke Muehlhauser will update on my comment and realize that you can’t approach outsiders the way it often happens on lesswrong. I have been voicing this particular criticism for some time now, and it has already gotten a lot better.
Although people like wedrifid will probably never realize that it isn’t a good idea to link to lesswrong posts as if they were the holy book of everyone who is sane, while depicting everyone who disagrees as either stupid, a troll, or a master of dark arts.
Just check his latest comment; all he can do is attack people with a litany of charges like being logically rude or being unable to change their minds.
On a first pass, the Leverage Research website feels like Objectivism. I say this because it is full of dubious claims about morality and psychology but which are presented as basic premises and facts. The explanations of “Connection Theory” are full of the same type of opaque reasoning and fiat statements about human nature which perhaps I am particularly sensitive to as a former Objectivist. Knowing nothing more than this first impression, I am going to make a prediction that there are Objectivist influences present here. That seems at least somewhat testable.
There are no Objectivist influences that I am aware of.
I didn’t notice any Objectivist influences looking through the high-level claims on the Leverage website, but their persuasive style does remind me quite a bit of Objectivism’s: lots of reasonable-sounding but not actually rigorous claims about human thinking, heavy reliance on inference, and a fairly grandiose tone in the final conclusions. I’d credit this not to direct influence but to convergent evolution. To Leverage’s credit, Connection Theory does come off as considerably less smug, and the reductionism isn’t as sketchy.
Now, none of this is a refutation—I haven’t gone deep enough into Leverage’s claims to say anything definitive about whether or not any of this stuff actually works. Plenty of stuff that I’d consider true reminds me of Objectivism’s claims, or of those of other equally pernicious ideologies. But it’s definitely enough to inform my priors, and it should shed light on some potential signaling problems in the presentation.
Maybe you are not aware of them?
Your denial would be more convincing if you compared and contrasted CT ideas and objectivist ideas.
Unfortunately, I’m not familiar with Ayn Rand’s ideas on psychology.
For a given value of ‘unfortunate’. :)
^Beat me to it.
Since Connection Theory is mostly Geoff Anders’ work, I would be very surprised if it could have big influences he wasn’t aware of (maybe if he delegated a lot of stuff to Objectivist students or something, or was heavily influenced by some Objectivist psychologist).
I’m not an expert on Objectivism, but one of Rand’s principles was to always pass moral judgement.
Connection theory has much less moral judgement to it than most approaches.
It’s conceivable that there’s a similar intellectual style of trying to understand the world by starting with abstractions, but that’s not necessarily a matter of direct influence.
Maybe you should add a note at the top of the comment explicitly stating that it is not really about Leverage and does not at all represent your views about them.
You are trolling Leverage because you have issues with SingInst? It just isn’t ok to slander an organization like that based, from what I can tell, on the fact that there are social affiliations between Leverage and another group you disapprove of.
I thought the point was that the comment showed how the arguments, which we’ve gotten used to and don’t fully question anymore, would look ridiculous when applied in a different context. (It was a pretty effective demonstration for me—the same responses did look far less convincing when they were put in the mouth of Leverage Research people rather than LW users..)
Exactly right.
Some remarks:
I don’t think the arguments LW/SI uses against its opponents are wrong, but reality is more complex than the recitation of a rationality mantra.
If you want to discuss or criticize people who are not aware of LW/SI then you should commit to an actual discussion rather than telling them that they haven’t read the sequences.
There is no reason for outsiders to suspect that LW/SI has any authority when it comes to arguments about AI, quantum physics or whatever.
If you want to convince outsiders then you should ask them questions and voice your own opinion. You should not tell them that you have it all figured out and that they just have to read those blog posts you wrote.
You should not portray yourself as the single bright shining hope for the redemption of humanity’s collective intellect. That’s incredibly arrogant and cultish.
You have to distill your subject matter and make it more palatable for the average person who really doesn’t care about being part of the Bayesian in-crowd.
Could you please stop such accusations; it’s becoming ridiculous. If you have nothing sensible to say, then let the matter rest. Your main approach to gaining karma seems to be quantity rather than actual argumentation.
I was just making fun of the original post that described Leverage Research as “secular messianism”. At the same time I was pointing out something important about how some behavior here could be perceived.
You seem to be the actual troll here who hides behind the accusation of trolling.
The people being slandered here aren’t just strangers on the internet—they are people I know. If I see them being misrepresented, then of course I am going to object. I spent a week taking classes from Geoff, and he most certainly has studied (and researched) psychology. Yet his company is portrayed here as uneducated. And then, by way of justification, you say:
I most certainly am going to make accusations about that, because it just isn’t ok. You don’t go around misrepresenting the qualifications and credibility of Leverage Research just because you have an issue with the Singularity Institute.
There’s only one way I was able to interpret XiXiDu’s top comment (the one you link to), and that was as a satire of responses to his many previous questions about SIAI. I can’t read it as a slander against Leverage at all. To me, this thread is roughly equivalent to attacking Jonathan Swift for his policy of baby-eating.
Now you are being hypocritical. The author of the original post was the one who was rude with respect to Leverage. But you have chosen to attack me instead, I suspect because you agree with the author of the original post but get outraged if someone does criticise your precious SI.
That’s basically a confession.
Or the result of having an accurate model of wedrifid.
When I read XiXiDu’s original comment, I also predicted wedrifid would respond negatively.
What’s wrong with rhetorical facades? I would even say they are one of my favorite things.
I don’t think that this is actually true. While I don’t know of Geoff’s specific plans to test CT, I do know that he’s interested in continuing to do so.
The author of the original post is skeptical about Leverage and I showed what would happen if Leverage was like lesswrong/SI. I am not criticizing Leverage.
...I’m not sure whether you’re making fun of Leverage Research or LessWrong in general here. Which is worrying.
To address the point behind the parody, the main difference between this and the analogous argument with “SIAI researcher”, besides user:Incorrect’s point and the fact that not being convinced is almost never automatically equated with motivated skepticism, is that the links to the blog posts don’t work. When they do, I don’t think the practice of linking to blog posts is problematic at all. It reduces the need to repeat arguments, and centralizes discussion of a particular issue to the comments of the corresponding post, instead of it being all over the place. Your dialog also gives the impression that you can find a post from the LW archives to support “anything”, that the linked post usually appears as incomprehensible and seemingly unrelated as a bunch of random digits, and that the act of giving someone a link to a relevant blog post is mainly a way to confuse and intimidate them with authority and the point is never for them to actually read it. Each of these impressions is false, as far as I can tell.
I’m of two minds about this sort of skepticism.
On the one mind, successfully addressing the “meta topics” related to the real hard overwhelming problems of the world seems a far better way of improving the world than devoting one’s life to addressing the object-level problems directly. “Give me six hours to chop down a tree and I will spend the first four sharpening the axe,” and all that.
On the other mind, the odds of any given attempt to address those “meta topics” being successful are punishingly low.
Back on the first mind, the odds of a successful attempt ever being made become much lower if nobody ever makes or supports attempts.
On the second mind, the expected opportunity costs associated with failed attempts might very well outweigh the expected value of a successful one.
Back on the first mind, those costs probably won’t outweigh those associated with, say, World of Warcraft.
Huh? That is not how the world works.
OK, let’s consider the opposite proposition: Most positions of power are occupied by people who are crazy, resentful, and irresponsible. It might sound cynically plausible. But I would see even those traits as mostly a reaction to the difficulties of power. The idea of smooth sociopaths who lie their way to power and riches via superior awareness of human gullibility is way overrated. In most cases, power is a reward for boring hard work, taking responsibility, and being effective. Unless you’re born to power, you only get to have it and to hold onto it by working with some group of people who have a complementary power of their own, the power to depose you—your investors, your voters, your professional colleagues.
The LR paradigm seems to be based on a counter-myth, the opposite of the manipulator who cruises to worldly success on the basis of pushing the right buttons in people’s minds. This counter-myth is the idea of the super-effective altruist who similarly owes their success to superlative psychological knowledge. Both myths underrate the psychological sophistication of the peers who are being manipulated (for good or bad) by the mythical figures, and both myths overrate how far you can get just with psychology.
That just sounds stupid to me. Being crazy and irresponsible are deleterious traits in those seeking power. Resentfulness is something the losers in the power game are more likely to feel, and dwelling in resentment is something of a failure mode when it comes to practical power-gaining. Don’t get mad, don’t get even; just take your next step to power.
These things complement each other. You need both if you are going to reach the higher echelons.
And here is where I rest my idealism and my optimism. I don’t try to force people—particularly powerful people—to be benevolent and responsible (for anything but their own success). I don’t force myself to believe that people are bastions of goodwill and paternal grace. I prefer to see systems and institutions set up such that plain old self-interested, hypocritical, Machiavellian monkeys have payoff structures that ensure their behavior benefits everyone else anyway. Any fundamental, non-hypocritical, and internally coherent goodwill beyond that is just a bonus.
Is your objection the assertion that authority figures have “common sense, good will, and a sense of responsibility” at all? Or do you assert that other values will often override them?
If the former, I suggest that Tywin Lannister seems like he does have those three qualities. It’s just that he also has a bunch of jerkwad values. (If you aren’t familiar with A Song of Ice and Fire, I’ll come up with a different example.) And I think he’s a reasonable representation of many actual authority figures.
I personally think that it’s certainly possible for an authority figure to have all these qualities. The probability of this happening is nonzero. However, it’s much more likely that the authority figure possesses ambition, ruthlessness, and a lust for power. Generally, one does not become an authority merely by being wise and nice to everyone.
I’d actually expect a substantial number of authority figures to carry both those sets of qualities: the OP’s qualities make seeking and maintaining authority a lot more viable in a situation where anyone is even halfway good at judging honesty, and there are clear motivational reasons for yours. The only ones that could be said to interfere with each other are “good will” and “ruthlessness”, and I don’t think even those are fully incompatible.
Or was that the point you were trying to make?
I disagree with TimS’s reply. As I see it, people who seek power in the first place rarely do so for entirely selfless reasons. And even if they do seek power and somehow acquire it, they must still hang on to it, fending off assaults from the occasional competitor who cares about nothing but power for its own sake. In that kind of environment, only the most efficient optimizers survive.
You posit an environment where people are “even halfway good at judging honesty”, but I don’t know of any places on Earth where that is actually the case (though I do admit that they can exist).
TimS says that the best way to signal “common sense, good will, and a sense of responsibility” is to actually have those qualities, and maybe that’s true (unless your opponent is running attack ads, of course). But there’s a very high cost associated with having things like “good will” and “responsibility”. Signaling these virtues without actually having them is harder, but it’s probably worth it in the long run, as long as your goal is to acquire power and keep it.
It was the point I was trying to make, which seemed to be missing in the wedrifid/Porter discussion.
More generally, it’s quite hard to become influential (even in non-democratic societies) without signalling that you have “common sense, good will, and a sense of responsibility.” And the easiest way to signal that you have those virtues is to actually have them. Which isn’t to say that your bad qualities (ambition, ruthlessness, and a lust for power) don’t frequently outweigh them.
Focusing on one’s narrow interests (lust for power?) conflicts fairly strongly with my understanding of “sense of responsibility.” YMMV
I’ve picked up enough from popular culture to get the general picture. I haven’t read or watched the series—I tend to be biased towards stories with a clear character I can identify with. It’s fantasy escapism—I don’t want all this sophisticated moral murkiness. :)
I humbly suggest reading the series; there are clear characters for a range of values. Gregor Clegane is a stand-out example. They spend much of their time chopping the heads off of the morally murky characters, too!
I’m sure I’ll get around to it eventually. I must admit though, even my dark side doesn’t go quite so far as to empathize strongly with drug addicted child killing rapists. Raping a mother while the blood and brains of her slaughtered child are still on his hands—that guy really does take things to extremes!
Yes. He has a very logical mindset, though. “My horse failed me --> decapitate my horse” is one such example.
The ‘use a mare in heat’ ploy was a good idea. I wonder if that would actually work, and if so, whether they ever made a rule about it. It wouldn’t at all surprise me if some medieval Tim Ferris gamed jousting systems in ways like this and outraged enough nobles that they made this kind of trick a capital offense.
I seem to recall reading somewhere that the Crusaders ran into trouble with this. The Europeans favored stallions (stronger and more intimidating); the Saracens favored mares (faster and easier to control); the combination didn’t work out well for the Europeans.
Okay, I’m about 6 boxes into the flowchart of their plan and already releasing “gaaah” noises. I’ll add a few further updates, but I may not have the willpower to make it through the whole thing.
Okay, finished. Wasn’t as bad as I’d expected given the beginning.
Short summary: spend decades developing a particularly powerful way to understand people, and then use that to do as much good as possible, e.g. by working like a grant-awards agency that really understands who deserves the money, or like a think tank that really understands what messages people will remember, etc.
If you sort of squint your eyes, it makes sense. On the other hand, I won’t hold my breath. For example, their plan only works if they beat everyone else to this understanding by greater than the time it takes to recruit all the donors and prestige they’ll need (~5 years?). For another example, they’re sort of hamstrung by “Connection Theory” already.
If you write a longer comment or discussion post explaining what you found, e.g. how “Connection Theory” hamstrings them, I will upvote it.
Well, I don’t feel like it, but it might be fun to try and figure out why I’d say that from this summary.
Those linked basic claims look well falsified already.
Wishful thinking is not THAT ubiquitous and unbeatable. Lots of people expect to die without an afterlife and wish it wasn’t so.
Falsified all over the place, by most of the heuristics and biases literature for one, unless “that they can” is interpreted in a slippery fashion to describe whatever people in fact do.
This looks like it denies that people ever make real tradeoffs, but they do.
Got it in one.
Wait, you are serious? I’m pretty sure a larger-than-normal fraction of the people that rule us are sociopaths.
I’m happy people here realize this.
What makes you think that? I’d have guessed that a lot of the non-optimal decisions made by people in various positions of power are the result of normal human biases mixed with whatever incentives pertain to their situation.
I’ve seen some speculation about this, but even if it’s true, and the proportion is larger by an order of magnitude than in the background population, it’s still a lot less than “most”.
I don’t think sociopathy or malice is really a good explanation of what goes wrong, when it does go wrong, among people of power.
Leverage Research seems, at first glance at least, to be similar to SIAI. Their plans have similar shapes:
1). Grow the organization (donations welcome).
2). Use the now-grown organization to grow even faster.
3). ???
4). Profit ! Which is to say, solve all the world’s problems / avoid global catastrophe / usher in a new age of peace and understanding / etc.
I think they need to be a bit more specific there in step 3.
We’ve tried to fill in step 3 quite a bit. Check out the plan and also our backup plan. We’re definitely open to suggestions for ways to improve, especially places where the connection between the steps is the most tenuous.
I’d actually read your plan before posting my comment (though not the backup one). I found it very hard to follow and somewhat nebulous, but maybe it’s just me. There, and on the backup plan, you say things like “Study field X and extract all useful information”, which is a statement that I’m finding very difficult to call anything other than “hubris”.
In addition, the sheer complexity of your flowchart is daunting, and I question its utility. Shouldn’t you at least find out whether your Connection Theory works at all, and if so, whether it has practical applications, before drawing boxes about things like “design optimal societies”?
Really? I’m sure powerful people have common sense, and I’m sure they have the ordinary good will towards their friends and responsibility for their allies. But I doubt they have the extraordinary good will and responsibility needed to do the right thing. Maybe you believe that such qualities are unrealistic because of “human frailty”.
I think your second criticism is solid:
Well, yeah, but the same could be said of Leverage Research.
What are you referring to?
I believe that (1) many powerful people do wicked and irresponsible things, and many more do wicked and irresponsible things by failing to act; (2) they do so not out of ignorance; and (3) if they are to do unwicked and responsible things, they would need a certain sense of right and wrong, and a certain ability to act on that sense. I can be more specific if you like.
Now that I write this, it occurs to me that although it seems like most powerful people do wicked things, there might be a selection effect going on, and I might be surprised by the proportion of powerful people doing good.
Well, Jesus (if he ever lived), or someone publishing under that name, tried to create a beneficial contagious ideology. Immediately thereafter, this ideology was adjusted to be more contagious at the expense of no longer being beneficial. The new version out-competed the old without much struggle, and after it spread you got crusades, witch-hunts, and the like.
This kind of stuff simply doesn’t work. Contagious memes are subject to redesign. There is a very recent example for you: Godwin’s law. I have seen Godwin himself try to compare someone to Nazis, quite validly in my opinion, only to be struck down with a reference to ‘his law’, which has come to be merely a tool for denying the historical lessons of the Holocaust, quite contrary to the alleged original intent.
There are two kinds of ideas: ones that are rationally spread out of some form of self-interest, and ones that are spread irrationally because they exploit some deficiency in the thought process. Creating more of the latter won’t help anyone.
I think you just invented the anti-Godwin. “Know who else tried to make the world a better place? Jesus.”
Lol, I’m soo using this exact phrase.
I think this is legitimate discussion, but it’s unclear why it needed its own thread.
This is way too harsh. So they don’t have a great plan. They have amazing goals and have said that Connection Theory (despite being all over the site) is not completely necessary to accomplish their goals. Why bash them here? Why not bash the millions of groups that have both a bad plan and bad goals?
Why add the word “magic” or mention “messianism”? Smells unfair to me.
Connection Theory is not the problem. It’s the whole idea of saving the world with a psychological theory. That’s what tipped my assessment from positive to negative.
Have you looked at the plan? Magic and messianic are not exaggerations.
I think everybody is getting hung up about connection theory which is not the only thing that Leverage Research does. I’m not completely sure, but I’m pretty sure it’s not even the main thing they do. EDIT: Why is this tagged politics? Does it have to do with the mind-killing comment thread about meta-trolling?
“The actively and stably benevolent people successfully seek enough power to be able to stably guide the world.” “Give everyone the power to achieve their ultimate goals as far as possible without harming others.” Those are political intentions and political acts.
Also, every country on Earth already has a class of people nobly bearing the burden of being in charge of everything. You cannot set out to deliberately and comprehensively change the world without encroaching upon this existing political sphere, even if you imagine The Change happening via leaderless self-organizing harmony. Either you intend to be on the throne, advising the throne, or making the throne irrelevant, and in each case, the current occupant of the throne will take a keen interest, if by some miracle you do start to make a difference.
Connection Theory is not the main thing that we do. It’s one of seven main projects. I would estimate that about 15% of our current effort goes directly into CT right now. It’s true that having a superior understanding of the human mind is an important part of our plan, and it’s true that CT is the main theory we’re currently looking at. So that is one reason people are focusing on it. But it’s also one of the better-developed parts of our website right now. So that’s probably another reason.
The basic proposition seems reasonable enough to me, though it might offend the sensibilities of those taken by the correspondence theory of truth.
People spend a lot of time trying to improve the world by imparting The Truth to their neighbors. There might be some mileage spreading ideas for their predicted effect, and not for their Correspondence Truth value. Spread ideas to win, not to convert. Dennett hints at this with his domesticating religion program.
Ultimately, the truth of Connection Theory depends on one thing: whether Yudkowsky supports it. Let us all patiently wait for his judgement before engaging in infighting lest we accidentally end up on the side of the traditional rationalists.
This is actually pretty funny and sort of true.