Introducing Leverage Research
Geoff Anders asked me to post this introduction to Leverage Research. Several friends of the Singularity Institute are now with Leverage Research, and we have overlapping goals.
Hello Less Wrong! I’m Geoff Anders, founder of Leverage Research. Many Less Wrong readers are already familiar with Leverage. But many are not, and because of our ties to the Less Wrong community and our deep interest in rationality, I thought it would be good to formally introduce ourselves.
I founded Leverage at the beginning of 2011. At that time we had six members. Now we have a team of more than twenty. Over half of our people come from the Less Wrong / Singularity Institute community. One of our members is Jasen Murray, the leader of the Singularity Institute’s recent Rationality Boot Camp. Another is Justin Shovelain, a two-year Visiting Fellow at SIAI and the former leader of their intelligence amplification research. A third is Adam Widmer, a former co-organizer of the New York Less Wrong group.
Our goal at Leverage is to make the world a much better place, using the most effective means we can. So far, our conclusion has been that the most effective way to change the world is by means of high-value projects, projects that will have extremely positive effects if they succeed and that have at least a fair probability of success.
One of our projects is existential risk reduction. We have conducted a study of the efficacy of methods for persuading people to take the risks of artificial general intelligence (AGI) seriously. We have begun a detailed analysis of AGI catastrophe scenarios. We are working with risk analysts inside and outside of academia. Ultimately, we intend to achieve a comprehensive understanding of AGI and other global risks, develop response plans, and then enact those plans.
A second project is intelligence amplification. We have reviewed the existing research and analyzed current approaches. We then created an initial list of research priorities, ranking techniques by likelihood of success, likely size of effect, safety, cost and so on. We plan to start testing novel techniques soon.
These are just two of our projects. We have several others, including the development of a rationality training program, the construction and testing of theories of the human mind, and an investigation of the laws of idea propagation.
Changing the world is a complex task. Thus we have a plan that guides our efforts. We know that to succeed, we need to become better than we are. So we take training and self-improvement very seriously. Finally, we know that to succeed, we need more talented people. If you want to significantly improve the world, are serious about self-improvement and believe that changing the world means we need to work together, contact us. We’re looking for people who are interested in our current projects or who have ideas of their own.
We’ve been around for just over a year. In that time we’ve gotten many of our projects underway. We doubled once in our first six months and again in our second six months. And we have just set up our first physical location, in New York City.
If you want to learn more, visit our website. If you want to get involved, want to send a word of encouragement, or if you have suggestions for how we can improve, write to us.
With hope for the future,
Geoff Anders, on behalf of the Leverage Team
Geoff,
Of course you and I are pursuing many of the same goals and we have come to many shared conclusions, though our methodologies seem quite different to me, and our models of the human mind are quite different. I take myself to be an epistemic Bayesian and (last I heard) you take yourself to be an epistemic Cartesian. You say things like “Philosophically, there is no known connection between simplicity… and truth,” while I take Occam’s razor (aka Solomonoff’s lightsaber) very seriously. My model of the human mind ignores philosophy almost completely and is instead grounded in the hundreds of messy details from current neuroscience and psychology, while your work on Connection Theory cites almost no cognitive science and instead appears to be motivated by folk psychology, philosophical considerations, and personal anecdote. I place a pretty high probability on physicalism being true (taking “physicalism” to include radical platonism), but you say here that “it follows [from physicalism] that Connection Theory, as stated, is false,” but that some variations of CT may still be correct.
Why bring this up? I suspect many LWers are excited (like me) to see another organization working on (among other things) x-risk reduction and rationality training, especially one packed with LW members. But I also suspect many LWers (like me) have many concerns about your research methodology and about connection theory. I think this would be a good place for you to not just introduce yourself (and Leverage Research) but also to address some likely concerns your potential supporters may have (like I did for SI here and here).
For example:
Is my first paragraph above accurate? Which corrections, qualifications, and additions would you like to make?
How important is Connection Theory to what Leverage does?
How similar are your own research assumptions and methodology to those of other Leverage researchers?
I suspect it will be more beneficial to your organization to address such concerns directly and not let them lurk unanswered for long periods of time. That is one lesson I take from my recent experiences with the Singularity Institute.
BTW, I appreciate how many public-facing documents Leverage produces to explain its ideas to others. Please keep that up.
Hi Luke,
I’m happy to talk about these things.
First, in answer to your third question, Leverage is methodologically pluralistic. Different members of Leverage have different views on scientific methodology and philosophical methodology. We have ongoing discussions about these things. My guess is that probably two or three of our more than twenty members share my views on scientific and philosophical methodology.
If there’s anything methodological we tend to agree on, it’s a process. Writing drafts, getting feedback, paying close attention to detail, being systematic, putting in many, many hours of effort. When you imagine Leverage, don’t imagine a bunch of people thinking with a single mind. Imagine a large number of interacting parallel processes, aimed at a single goal.
Now, I’m happy to discuss my personal views on method. In a nutshell: my philosophical method is essentially Cartesian; in science, I judge theories on the basis of elegance and fit with the evidence. (“Elegance”, in my lingo, is like Occam’s razor, so in practice you and I actually both take Occam’s razor seriously.) My views aren’t the views of Leverage, though, so I’m not sure I should try to give an extended defense here. I’m going to write up some philosophical material for a blog soon, though, so people who are interested in my personal views should check that out.
As for Connection Theory, I could say a bit about where it came from. But the important thing here is why I use it. The primary reason I use CT is because I’ve used it to predict a number of antecedently unlikely phenomena, and the predictions appear to have come true at a very high rate. Of course, I recognize that I might have made some errors somewhere in collecting or assessing the evidence. This is one reason I’m continuing to test CT.
Just as with methodology, people in Leverage have different views on CT. Some people believe it is true. (Not me, actually. I believe it is false; my concern is with how useful it is.) Others believe it is useful in particular contexts. Some think it’s worth investigating, others think it’s unlikely to be useful and not worth examining. A person who thought CT was not useful and who wanted to change the world by figuring out how the mind really works would be welcome at Leverage.
So, in sum, there are many views at Leverage on methodology and CT. We discuss these topics, but no one insists on any particular view and we’re all happy to work together.
I’m glad you like that we’re producing public-facing documents. Actually, we’re going to be posting a lot more stuff in the relatively near future.
::follows various links::
Is CT falsifiable? There’s no obvious way to determine a person’s intrinsic goods except by observing their behavior, but a person’s behavior is what CT is supposed to predict in the first place. If a person appears to be acting in a way that contradicts the Action Rule, then “CT is wrong” and “CT is fine; the person had different intrinsic goods than I thought they did” are both consistent with the evidence.
Short answer: Yes, CT is falsifiable. Here’s how to see this. Take a look at the example CT chart. By following the procedures stated in the Theory and Practice document, you can produce and check a CT chart like the example chart. Once you’ve checked the chart, you can make predictions using CT and the CT chart. From the example chart, for instance, we can see that the person sometimes plays video games and tries to improve and sometimes plays video games while not trying to improve. From the chart and CT, we can predict: “If the person comes to believe that he stably has the ability to be cool, as he conceives of coolness, then he will stop playing video games while not trying to improve.” We would measure belief here primarily by the person’s belief reports. So we have a concrete procedure that yields specific predictions. In this case, if the person followed various recommendations designed to increase his ability to be cool, ended up reporting that he stably had the ability to be cool, but still reported playing video games while not trying to improve, CT would be falsified.
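The falsification procedure described above can be sketched in code. This is a purely illustrative toy, not anything from the actual CT documents: the `Prediction` class, the `check_falsified` function, and the report keys are all hypothetical names invented for this sketch.

```python
# Toy sketch of the falsification check in the video-game example.
# All names here are hypothetical; this is not drawn from the CT
# Theory and Practice document.

from dataclasses import dataclass

@dataclass
class Prediction:
    """If the condition comes to hold, the behavior should stop."""
    condition: str  # a belief the person reports holding, e.g. via belief reports
    behavior: str   # the behavior CT predicts will cease

def check_falsified(prediction: Prediction, reports: dict) -> bool:
    """The prediction is falsified when the person reports the condition
    as true but also reports that the predicted-to-stop behavior persists."""
    condition_holds = reports.get(prediction.condition, False)
    behavior_persists = reports.get(prediction.behavior, False)
    return condition_holds and behavior_persists

# The example chart's prediction: once the person stably believes he has
# the ability to be cool (as he conceives of coolness), he stops playing
# video games while not trying to improve.
p = Prediction(condition="believes_stably_able_to_be_cool",
               behavior="plays_games_without_trying_to_improve")

# Reports consistent with CT: belief acquired, behavior gone.
print(check_falsified(p, {"believes_stably_able_to_be_cool": True,
                          "plays_games_without_trying_to_improve": False}))

# Reports that would falsify CT: belief acquired, behavior persists.
print(check_falsified(p, {"believes_stably_able_to_be_cool": True,
                          "plays_games_without_trying_to_improve": True}))
```

The point of the sketch is just that the procedure bottoms out in a concrete, checkable conditional: a specified belief report plus a persisting behavior report jointly contradict the theory.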
Longer answer: In practice, almost any specific theory can be rendered consistent with the data by adding epicycles, positing hidden entities, and so forth. Instead of falsifying most theories, then, what happens is this: You encounter some recalcitrant data. You add some epicycles to your theory. You encounter more recalcitrant data. You posit some hidden entities. Eventually, though, the global theory that includes your theory becomes less elegant than the global theory that rejects your theory. So, you switch to the global theory that rejects your theory and you discard your specific theory. In practice with CT, so far we haven’t had to add many epicycles or posit many hidden entities. In particular, we haven’t had the experience of having to frequently change what we think a person’s intrinsic goods are. If we found that we kept having to revise our views about a person’s intrinsic goods (especially if the old posited intrinsic goods were not instrumentally useful for achieving the new posited intrinsic goods), this would be a serious warning sign.
Speaking more generally, we’re following particular procedures, as described in the CT Theory and Practice document. We expect to achieve particular results. If in a relatively short time frame we find that we can’t, that will provide evidence against the claim “CT is useful for achieving result X”. For example, I’ve been able to work for more than 13 hours a day, with only occasional days off, for more than two years. I attribute this to CT and I expect we’ll be able to replicate this. If we end up not being able to, that’ll be obvious to us and everyone else.
Thanks for raising the issue of falsifiability. I’m going to add it to our CT FAQ.
It’s not uncommon for someone to come up with a self-help technique that works for himself but then doesn’t work nearly as well for others. And if he’s, say, selling a book, he may still be able to find 5 people out of 500 for whom it works and add their testimonials to the back cover!
So far, I see no reason to think that CT would be any better (either for prediction or self-improvement) than, say, Neuro-Linguistic Programming, which is also an alternative theory that claims impressive results and has a pretty big following.
I think it is possible some alternative psychology model will help people improve themselves and make humanity better, etc., but there are many candidates (including things like Scientology), and much potential for self-delusion, misunderstanding, or death spirals.
What I’ve heard is that, when they try to do studies, all types of therapy seem to be about as good as every other type; one study found that talking to a teenage girl with no training in particular was about as effective as talking with a professional therapist. (On the other hand, they also all tend to be better than nothing.)
(Note that this is all vague “what I remember hearing” stuff, so there’s probably something more definitive to be found if you Google it.)
You’re likely thinking of Dawes, at least in part. Obligatory Less Wrong link.
Thanks.
Any updates?
Do you have html for those documents? PDF is OK for me, but my guess is html is more openly accessible.
Seconded; I find PDFs annoying, especially on my home computer, where they don’t open in a browser tab but in a separate application. I don’t see any benefit at all to PDF, except for stuff that needs to be printed out so you can write on it.
But what quality of work? Organizing my closet is very different than reading a dense academic paper with full concentration.
I can usually do any type of work. Sometimes it becomes harder for me to write detailed documents in the last couple hours of my day.
Oops, I forgot to answer your question about how central Connection Theory is to what we’re doing.
The answer is that CT is one part of what some of us believe is our best current answer to the question of how the human mind works. I say “one part” because CT does not cover emotions. In all contexts pertaining to emotions, everyone uses something other than CT. I say “some of us” because not everyone in Leverage uses CT. And I say “best current answer” because all of us are happy to throw CT away if we come up with something better.
In terms of our projects, some people use CT and others don’t. Some parts of some training programs are designed with CT in mind; other parts aren’t. In some contexts, it is very hard to do anything at all without relying on some background psychological framework. In those contexts, some people rely on CT and others don’t.
In terms of our overall plan, CT is potentially extremely useful. That said, CT itself is inessential. If it ends up breaking, we can find new psychological tools. And we actually have a backup plan in case we ultimately can’t figure out much at all about how the mind works.
Geoff,
Thanks for your clarifications! Especially: “I believe (Connection Theory) is false; my concern is with how useful it is.” That sentence reads very differently from the opening paragraph of your Connection Theory page; you may want to tweak the wording there.
I do believe Peirce is either rolling over in his grave, or doing whatever the opposite of that is.
Rolling over in his grave in the other direction?
Is this what is meant by connection theory?
No, this is Connection Theory. It is not a mainstream theory of psychology; it is Geoff’s own theory of psychology.
See also: Trimtab.
I had never heard of that before, but it is interesting on a bunch of levels (mechanical, sociological, memetic, etc.). My presumption is that you’re interested primarily in the idea captured by this quote from Buckminster Fuller at the bottom of the Wikipedia page:
I’m curious how deep the analogy you’re suggesting is. Can you extend it into something more explicit? My naive thought would be that Eliezer_2007-ish was the trim tab, and all of what’s happening from ~2009 to ~2013 (Leverage Research included) is more like “the rudder” starting to move; the ship won’t even have visibly changed course until 2020-ish, and then only slightly. The place the analogy really seems to fail, to me, is that it presumes there is a single tiny thing that matters (which is quite complimentary and thus a nice PR angle), when really there are probably thousands of things that will retrospectively be seen to have mattered, and the English-speaking singularitarian political movement is just one of them.
EDA: I don’t understand LW’s voting here. Tim was the one with the idea, I just spelled out the implications for the sake of discussion, and his comment’s at 1 and mine is 9 now?!? He’s the one with the awesome signal/noise ratio and relevant links, not me, but I can’t vote myself down to rectify this.
Well there may be some cases where a little effort can make a big difference—and it may pay for individuals to seek them out. However, there’s obviously a big influence from technological determinism—which would tend to damp out small fluctuations due to the efforts of individuals.
Do you know of any solid methodologies for predicting outcomes from technology? To cash out political determinism I’d go with something like Bruce Bueno de Mesquita’s work, but I don’t know any methods for analyzing technological determination of history other than case-by-case reasoning, and nearly all of the “cases” I’ve seen are post hoc.
Nobody has a practical methodology for predicting the future in very much detail.
Technological determinism still seems like a big and important idea to me, though.
link
Reminds me of a footnote in Kripke,
“(1) This outline was prepared hastily—at the editor’s insistence—from a taped manuscript of a lecture. Since I was not even given the opportunity to revise the first draft before publication, I cannot be held responsible for any lacunae in the (published version of the) argument, or for any fallacious or garbled inferences resulting from faulty preparation of the typescript. Also, the argument now seems to me to have problems which I did not know when I wrote it, but which I can’t discuss here, and which are completely unrelated to any criticisms that have appeared in the literature (or that I have seen in manuscript); all such criticisms misconstrue my argument. It will be noted that the present version of the argument seems to presuppose the (intuitionistically unacceptable) law of double negation. But the argument can easily be reformulated in a way that avoids employing such an inference rule. I hope to expand on these matters further in a separate monograph. ”
Unfortunately, the Kripke footnote appears to be a joke only.
Nope, it’s near the beginning of Naming and Necessity. I got the copy-paste from the internet, but first came across it while writing an essay on definite descriptions.
There are a couple of similar-sounding footnotes in the preface and the first chapter, but I’m unable to find this particular one.
Ahhh, I may have mis-remembered. I’m away from the faculty library at the moment so can’t easily check.
I’ve got a copy right here and (1) I can’t find that footnote, or anything close enough that it might be a benignly garbled copy, in it (either in the main text or in footnotes) but (2) there’s plenty very near the start that’s like it in tone and that the footnote might well be a parody of. For instance, here’s some material from near the start of the preface, some phrases of which you will recognize:
To which he adds a footnote:
This part of Connection Theory seemed very interesting to me; I’ll restate it in my own words, since I’ve forgotten the original wording:
Seems to me that it relates to the idea of leaving a line of retreat. The way to cross a possible valley of bad rationality in yourself or in other person is to build new knowledge in such sequence that it does not endanger your values in the process. Sometimes the web of irrational beliefs may be very difficult to disentangle.
Checked the plan...
Lots of burdensome details...