Speaking personally, based on various friendships with people within Leverage, attending a Leverage-hosted neuroscience reading group for a few months, and having attended a Paradigm Academy weekend workshop.
I think Leverage 1.0 was a genuine good-faith attempt at solving various difficult coordination problems. I can’t say they succeeded or failed; Leverage didn’t obviously hit it out of the park, but I feel they were at least wrong in interesting, generative ways that were uncorrelated with the standard and more ‘boring’ ways most institutions are wrong. Lots of stories I heard sounded weird to me, but most interesting organizations are weird and have fairly strict IP protocols so I mostly withhold judgment.
The stories my friends shared did show a large focus on methodological experimentation, which has benefits and drawbacks. Echoing points made elsewhere in this thread, I do think that when experiments are done on people and they fail, there can be a real human cost. I suspect some people did have substantially negative experiences as a result. There’s probably also a very large set of experiments where the result was something like, “I don’t know if it was good or if it was bad, but something feels different.”
There’s quite a lot about Leverage that I don’t know and can’t speak to, for example the internal social dynamics.
One item that my Leverage friends were proud to share is that Leverage organized the [edit: precursor to the] first EA Global conference. I was overall favorably impressed by the content of the weekend workshop I did, and I had the sense that to some degree Leverage 1.0 gets a bad rap simply because they didn’t figure out how to hang onto credit for the good things they did do for the community (organizing EAG, inventing and spreading various rationality techniques, making key introductions). That said, I didn’t like the lack of public output.
I’ve been glad to see Leverage 2.0 pivot to progress studies, as it seems to align more closely with Leverage 1.0’s core strength of methodological experimentation, while avoiding the pitfalls of radical self-experimentation.
Would the world have been better if Leverage 1.0 hadn’t existed? My personal answer is a strong no. I’m glad it existed and was unapologetically weird and ambitious, and I give its leadership serious points for trying to build something new.
Besides belief reporting, which rationality techniques did they invent and spread into the community for which they should get credit?
Goal factoring is another that comes to mind, but people who worked at CFAR or Leverage would know the ins and outs of the list better than I.
My understanding is that Geoff Anders and Andrew Critch each independently invented goal factoring, and had even been using the same diagramming software to do it! (I’m not sure which one of them first brought it to CFAR.)
Geoff Anders was the first one to teach it at CFAR workshops, I think in 2013. This is the first time I’ve heard claims of independent invention; at the time, all the CFAR people who mentioned it were synced on the story that Anders was a guest instructor teaching a technique that Leverage had developed. (Andrew Critch worked at CFAR at the time. I don’t specifically remember whether or not I heard anything about goal factoring from him.)
Anna & Val taught goal factoring at the first CFAR workshop (May 2012). I’m not sure if they used the term “goal factoring” at the workshop (the title on the schedule was “Microeconomics 1: How to have goals”), but that’s what they were calling it before the workshop, including in passing on LW. Geoff attended the third CFAR workshop as a participant and first taught goal factoring at the fourth workshop (November 2012), which was also the first time the class was called “Goal Factoring”. Geoff was working on similar stuff before 2012, but I don’t know enough of the pre-2012 history to know whether there was earlier cross-pollination between Geoff & CFAR folks.
Critch developed aversion factoring.
In this video from March 2014 (https://www.youtube.com/watch?v=k255UjGEO_c), Andrew Critch says he developed “aversion factoring”.
I believe this. Aversion factoring is a separate insight from goal factoring.
Do you have a link to more info on how they do goal factoring/what software they were using?
When I learned it from Geoff in 2011, they were recommending yEd Graph Editor. The general process is to write things you do or want to do as nodes, then connect them with “achieves or helps to achieve” edges (e.g., going to work achieves making money, which in turn achieves other things you want).
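To make the diagram structure concrete, here's a minimal sketch of a goal-factoring graph as plain Python data, with a small helper to trace what an activity ultimately helps achieve. The node names and the `achieves` helper are my own illustration of the "nodes + achieves-edges" idea described above, not anything Leverage published:

```python
# Minimal sketch of a goal-factoring diagram as a directed graph.
# An edge A -> B means "A achieves or helps to achieve B".
from collections import deque

def achieves(edges, start):
    """Return everything `start` directly or indirectly helps achieve."""
    reached, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for target in edges.get(node, []):
            if target not in reached:
                reached.add(target)
                queue.append(target)
    return reached

# Hypothetical example goals, mirroring the "go to work" example above.
edges = {
    "go to work": ["make money"],
    "make money": ["pay rent", "save for travel"],
}

print(sorted(achieves(edges, "go to work")))
# ['make money', 'pay rent', 'save for travel']
```

In a diagramming tool like yEd you'd draw this visually rather than in code, but the underlying structure is the same: a directed graph whose edges point from activities toward the goals they serve.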
When was the precursor to the first EAG? Before 2015?