This is a link to the latest Bayesian Conspiracy episode. Oliver tells us how LessWrong instantiated itself into physical reality via Lighthaven, along with a bit of deep lore of foundational Rationalist/EA orgs. He also gives a surprisingly nuanced (IMO) view on Leverage!
Do you like transcripts? We got one of those at the link as well. It's a mid AI-generated transcript, but the alternative is none. :)
Very bad transcript
Welcome to the Bayesian Conspiracy. I'm Eneasz Brodski. I'm Steven Zuber. And we have a guest with us today. Please say hello, Oliver. Hello, I'm Oliver. Oliver, welcome back to the podcast. Last time you were here, I don't remember how long ago it was. We were talking about LessWrong 2.0. Do you remember how long ago that was?
0:20
It must have been two years and three months, something like that. Maybe a year and three quarters of a month. Do you want to take another guess? Maybe a year and a half?
0:31
It was almost four years to the day.
0:34
Four years. Oh, there are time skips in my life. And so I was in the wrong time skip.
0:40
It was before COVID. That’s right. That feels like a thousand years ago.
0:43
That’s right. I knew that it wasn’t during the middle of the pandemic.
0:47
Isn't it crazy? There's like a year and a half just carved out of all of our lives. Like some serious Avengers bullshit. Yeah, yeah. Well, Oliver, welcome back. We are talking at least partially about LessWrong here today, but primarily about Lighthaven. That's right, all Lightcone
1:02
infrastructure. Yes. Excellent. Well, I guess that brings me to my first question then: what the heck is Lightcone?
1:08
Yeah. I mean, basically, you know, I tried to revive LessWrong in 2017. I think it was reasonably successful, and now we have LessWrong 2.0. Pretty happy with it. I think kind of the core thing that happened is that we, you know, at the core of it was always how do we create intellectual progress on the issues
1:24
that we care most about, where the core of it was always the art of rationality, like how do we develop good methods of thinking, and how do we, you know, deal with artificial intelligence, both as a philosophical exercise, as something to look at to understand the world and how minds work and how to improve our own minds,
1:39
but also as something with large societal effects, existential risk, various things like this. And so we did many user interviews, kind of every year, with many of the core contributors that we cared about on LessWrong. And sometime in 2019, 2020, 2021, it became clear in our user interviews that the problems that they had about not
2:01
being as good researchers, producing as interesting ideas, helping with the things that we care about, were really things that could not be solved by adding additional web forms and fancy buttons to a website. They ended up being things like, well, I really want to find better co-founders.
2:16
I really want to have a place where I can really grok ideas or engage with ideas deeply. And it kind of became clear that if we wanted to not, in some sense, give up ownership and responsibility over the core mission that we had, which was improving intellectual progress on these core issues,
2:34
then we kind of needed to expand beyond just the website. And so we thought quite a while about, like, what does that mean? Like, if we want to take a broader scale of responsibility, where are the bottlenecks? What are the problems? And some of this happened during the pandemic. And kind of the pandemic both highlighted for us,
2:48
like, how crucial all the in-person infrastructure was, and also demonstrated some very clear opportunities to get people to move to new places and to create new physical infrastructure. Because especially here in the Bay Area, many, many people moved away during COVID.
3:06
Because why would you pay $3,500 a month for your studio apartment if you can’t go anywhere and there’s no reason to be in the Bay Area? Yeah. As the pandemic was ending, we saw this huge opportunity. We could start investing in building in-person infrastructure in the Bay Area kind of with a blank slate,
3:26
which is like much more opportunity to think about how to integrate that really kind of into a coherent whole. And so the first thing that we did was we ran a retreat kind of right after COVID at the earliest opportunity where we could kind of run things called the Sanity and Survival Summit. And it went very well.
3:41
What is the Sanity and Survival Summit? Like, who did you invite? What did you talk about? So it was about 80 people. I thought really hard about the format. The format ended up kind of very interesting. You know, it was a summit. There were no talks. As for the mechanism by which you could put sessions on the schedule,
3:56
it was a bit unconference-ish, but most of the sessions were planned quite a while in advance. But in order to put something on the agenda, you needed to write at least a two-page memo. We were very inspired. Around the same time,
4:09
I read Working Backwards by one of the ex-Amazon executives, who described a lot of the way Amazon works internally. It's one of the world's most successful companies. And they have this very interesting memo culture. Crucial org decisions tend to get made by trying to create these memos. They have these — often it's a PR FAQ,
4:28
like a press release where you start with: if we wanted to work on this project, pursue this initiative, what would the press release look like? Followed by an FAQ. There's kind of a specific memo format, and they have all of this lore and all of this structure to their memos,
4:42
but one of the most interesting things is that if you're in one of these meetings at Amazon, the way they start is that there's a memo that somebody has prepared for the meeting, and for the first 15 to 20 minutes of a meeting that is structured
4:52
around a memo, it is complete silence and everyone reads the memo right there. Nobody's expected to read the memo in advance. It is reasonable for, like, the collaborators or whatever to, you know, have skimmed the memo and maybe looked over it a bit.
5:04
But the general expectation is that you read the memo and engage with it for 15 to 20 minutes, during which time you write down notes on your biggest concerns and your questions. And then the remaining session is spent asking questions, usually with the person who wrote the memo,
5:16
digging into details and then coming to some kind of decision. And we adopted something very similar in many of our internal meetings and in our internal structure. And then we decided to do that at a conference. And so at the conference,
5:28
you would need to prepare at least a two-page memo. Each session would start in silence for at least 10 minutes, everyone would read the memo and write their thoughts down on the memo, and then there would be a Q&A with the person who wrote it, and
5:42
then usually followed by either some kind of group activity or negotiation or whatever other concrete thing the person wanted to go for. I think this had a lot of really interesting effects. It gave you a lot of information to assess whether you wanted to go somewhere before a session started. You know,
5:55
somebody would post in Slack, everything runs on Slack, their memo for the session, and you could skim it and get a sense of, like, is this actually something that I want? As opposed to the only thing that you have to go off of is, you know, a one-sentence title together with the presenter.
6:09
Another thing that it does is it in some sense reduces FOMO a lot, because you know that the core arguments are written down in something that you can engage with afterwards. Of course, a very interesting discussion might happen, and there was still a non-trivial amount of FOMO.
6:21
But I think ultimately it gives you the feeling that if you can’t make the session, you can read the memo and then just talk to the author yourself or have conversations with other people about the memo yourself.
6:31
And since this was sanity and survival, was it mostly rationalist topics and rationalist invitees?
6:38
At the time, I think this was kind of... you know, I've always felt very confused about what to call the members of this very extended ecosystem. These days I often go with the extreme mouthful of, you know, the extended rationalist slash effective altruism slash AI safety slash existential risk ecosystem, which is, you know, the least committal
7:00
possible way to refer to this diaspora. So again, I think our invite was like: we are inviting the people who we think embody the virtues of the Sequences and kind of LessWrong, and who we think have a decent chance of being kind of important to
7:15
humanity's future, and who we have established relationships with and have some specific
7:19
reason to trust. How do you feel about the stupid... sorry, I was prejudicing the audience there. How do you feel about the term TESCREAL to describe these people?
7:28
Yeah, TESCREAL. I love it. I mean, I think the best one is the Cosmists. I think that's what the C stands for. I was like, show me a Cosmist. I've never in my life met a Cosmist. Apparently, I'm great friends with them. Apparently, I'm like in cahoots.
7:43
Planning how to steer the development of AI to empower me and my tribe with those cosmists. But man, like, those cosmists really are very underground.
7:53
Like,
7:53
they're so underground that, like, even having worked in the space for, like, 10 years, I still have never met one in my life. Right. So, yeah, I mean, the TESCREAL thing is trying to solve a real problem. The fact that I'm sitting here and being like, well, you know, the extended rationality, effective altruism, longtermist, existential risk,
8:14
AI safety ecosystem is an infinite mouthful. And I can very much understand why people would want some pointer. Because I do think there’s a meaningful thing here. There’s a thing here that is a very large extended ecosystem. In some sense, it’s very closely related to Grey Tribe, where I think people care about that term.
8:31
And I like the term. It's very useful. It captures it. It kind of carves reality at its joints in important ways. But clearly, that term was created to besmirch and attack and kind of ridicule specific groups of people.
8:44
Do you think Grey Tribe...? No, no, because Grey Tribe is a little bit too broad. That's
8:48
right, I think. So, like, I think Grey Tribe, you know, I think centrally it's like Paul Graham and various parts of Silicon Valley, and the Grey Tribe is much bigger than just kind of the rationalist community, and not all of the rationalist community and longtermist community and existential risk community and AI safety
9:01
community is in it. And adopting TESCREAL would be like adopting a term from someone who termed us the baby killers, right? It's like, yeah, screw you. What about... I've heard just "rationalists and various adjacents" before.
9:13
Yeah, I mean, I think that makes sense in as much as you're talking primarily to rationalists. The problem is, of course, the reason why you need to list all of them is because so many people's primary identity will slightly differ, and then they will feel snubbed and slighted if you imply that they're just hangers-on to the rationalists.
9:32
Or, you know, I would feel slighted if somebody was like, yeah, and then there’s like the effective altruists and like the surrounding people. And I’m like, no, look, I have my own identity. I have some feelings about that. So I don’t know. I think it’s a hard problem.
9:44
Well, someday we’ll get a term. But the event went very well.
9:47
Yeah, so I think the event went very well. Where was it hosted? This was happening right kind of in 2021. And so this was less than two years after that extremely fun Camp Meeker incident where the Ziz crew protested outside of the CFAR reunion or whatever. And so at the time we were like, you know,
10:10
let's just err on the side of caution and keep the location of this private. And so we booked an event venue, but we were like, we're going to announce the location, it's going to be in the extended Bay Area, announce the location two weeks before, please keep it generally on the down low, just as, you know,
10:23
a component of having decent security there so that we don't get another annoying protest in which SWAT teams show up. Were you there when that happened? Well, I was on my way there when it happened. And then the police turned you
10:36
away? Because, I mean, basically we got a call. Like, from our perspective, it was like: we were on the way to the CFAR alumni reunion, and then people called us and were like, don't bother coming here, there's SWAT teams flying around, this place is fully
10:48
locked down, just go to this other location where people are going to meet up. This was like late in the evening, and then a few people stayed overnight at the CFAR venue, but then basically the next morning the event was called off.
10:59
We had a third co-host for the first half of this podcast’s existence. And Jace was at the event when it happened. Ouch. Yeah.
11:08
Yeah. Did not seem like a lot of fun. And then just to briefly answer the question. And so we kept the location secret, but I couldn't help myself. The location of the event was the SSS Ranch. And so we of course announced two months in advance the Sanity and Survival Summit.
11:24
Okay, what does the third S stand for? Well, we have Sanity, Survival, Summit. That's... okay.
11:29
Okay. And then we arrived and we're like, wait, it's such a good name — but that's just what they named the event venue. It was great. So we ran it at the SSS Ranch. It's a beautiful place. We checked out a lot of venues, and it was actually a substantial influence on Lighthaven in the end. Really?
11:49
How so? I think the key thing that the SSS Ranch really, really showed me is a principle for event running that I have cached in my brain as privacy with mutual visibility, where kind of two of the most competing things that you have going on at an event
12:05
is that you want to be able to have one-on-one conversations with other people, but also you really want the environment to be optimized so you can find anyone else. Like you’re at a conference, you’re there to talk to other people. Frequently at conference, you know, I have a list of people I want to talk to.
12:17
It’s very important that I’m somehow able to find those people. And sending them Slack messages, setting up meetings is far too much overhead. Like I think the ideal conference venue is one where you can see where everyone is, but somehow people are still capable of having private conversations.
12:31
The nice thing about the SSS Ranch is it's kind of somewhat similar to Lighthaven. It's like five buildings, but much, much farther apart. It's a ranch. It's five buildings around like a large field. And the dynamic that it creates is very nice, because people tend to hang out on the porches of all the buildings. Right.
12:47
or anywhere in the center where we set up various couches and various tents and various other things. But this means that you can have a one-on-one conversation or have a three- or four-person conversation that’s private because you’re far enough away from everyone else. But people can just look out over the field, look at all the porches,
13:03
and find you wherever you are. And another very interesting thing that happens as a result of that is you actually get a dimension of negotiation about whether it is a good idea or a bad idea for you to join a conversation, that you really don't get in conference venues where you have things like small
13:17
phone-booth-like small one-on-one conversation booths or whatever. Where, you know, you walk around a conference venue and you start walking towards a conversation, or you see the posture of two people in a conversation. And as you get closer, you can kind of see, as they notice you, are they opening up their conversation?
13:35
Is their body language staying focused inward? And it actually creates this 15, 20 second negotiation where you're not really interrupting their conversation necessarily at all. You're not interrupting their privacy, but you still kind of, via your extended body language from like 10, 15 meters away, have a negotiation about how open the conversation is,
13:55
how excited the people are for you to join. And so it would happen quite frequently that, like, you know, I would walk around the venue and I would see two or three people talking in the distance. I would walk towards them, I would kind of judge: is the body
14:07
language open, is the body language closed? And then, you know, a solid 60% of the time I'd be like, no, I think those people kind of want to stay within their conversation, and 40% I would be like, oh yeah, I can see how the body language is kind of more open —
14:21
as they noticed me out of the corner of their eye, they directed their face more towards me, waved to me. And that was actually a really important dynamic for creating kind of this feeling of, like, there is a group of people that I'm successfully having a shared conversation with, while really all the individual
14:37
parts of it can break up and refactor. And another thing that is related to that, and I mentioned it earlier about FOMO: one of the other things that we did is we basically told people that we don't want to have any sessions above nine to ten people.
14:50
So everything was very optimized so that everything was in parallel, and we tried really hard to just avoid anything that enrolled large groups of people, partially on the principle that if you're sitting in a large group of people, you're not participating, you're not talking, the conversation isn't really optimized for you.
15:04
It just seems better to be in an actively engaged environment.
15:07
Doesn’t that require a certain amount of social awareness, which people are not very famous for?
15:14
I don't know, I've just kind of never run into that problem that much, in contexts where the body language can really be read from quite far away. And, like, we of course tried various things to make things more explicit. Like, I feel
15:25
like the description that I gave was a bit more guess-culture-y. We also had an opening talk in which we explicitly role-played four different responses you can give if somebody asks you whether they can join a conversation.
15:39
Oh, we were like, that's how they did that at Vibecamp. And I found that really valuable. Nice. I can read social cues, fortunately, most of the time. But what it did was it established a guilt-free and hassle-free way of, you know, here's the nice,
15:53
here’s the way that you can say yes or no to people asking if they can join your group. And here’s the way you get up and leave. And then no one’s feelings can be hurt. And as far as I know, no one’s feelings were hurt. It was great. Yeah, setting up those norms is valuable. I don’t know.
16:05
I don’t want to oversell that problem as something that our community has a huge struggle with. Some of us are quite adept at it and some of us aren’t. And I think we probably average somewhere in the middle. Yeah.
16:16
Yeah, so I do think the specific things that we said at the beginning was, yes, you can join, but please fishbowl for 20 minutes. I think that’s kind of one of the most useful technologies. Fishbowling is a term for just, like, there is a container. You can be outside of that container and look into the fishbowl, but,
16:30
you know, don’t tap on the glass. Don’t try to disrupt the container that’s going on. Having that as an affordance really helps. Also, of course, being like, no, I think I prefer to continue this conversation in this group. Or being like, yes, please join.
16:42
I think having that explicitly called out and roleplayed at the beginning helps a bunch.
16:45
How did that turn into the idea for Lighthaven?
16:48
So I think at the end of that event, we were kind of specifically thinking about what kind of in-person infrastructure we wanted and how to kind of think about this whole extended ecosystem at an in-person level. And we just at the very end kind of had a talk that was like, should we build a campus? You know,
17:03
You all just spent one and a half years of your life kind of alone and in your homes and in much smaller contexts. Of course we understand there's going to be bias, and it's hard to give a clear answer, but would you be interested in just joining and
17:16
being deeply involved with an in-person campus that we built out of the people that are present here and kind of the broader extended ecosystem? I think, of the people who were there, it was like 75 to 80 percent who were like, yes,
17:28
I think I would very seriously consider substantially changing my life and substantially changing where I live if the infrastructure was there. Of course, lots of people have different opinions. And this was all kind of in the middle of the period when MIRI was considering where to live and where to move.
17:43
They were considering like moving to New York, moving to various other places. And so there was, of course, a lot of conversation about, well, if such a campus existed, where should it be?
17:51
Why did you settle on Berkeley?
17:53
Everything else is impossible. Network effects are just too strong. Even after the pandemic, I think just being anywhere else, we actually did quite a bit of experiments. One of the places that was the most interesting and in some sense is like being revived right now with the Freedom City discourse is the Presidio.
18:10
where San Francisco has this whole area in the north of the city that's federal land. It's a beautiful park. It has these buildings that you can rent. You can't really purchase them; you can only get long-term leases from the federal government. We considered making that the location of the campus. My favorite plan was locating it on Treasure Island.
18:29
It’s so perfect because so many people we talked to were like, oh, I really wanted to be in San Francisco. The other people were like, oh, I really wanted to be in the East Bay, conditional on being in the Bay Area, which is also a question that we thought quite a bit about. But kind of within that,
18:42
I liked the idea of just Treasure Island and then building a large water base where people go via boats to cross the Bay.
18:49
For people who aren’t in the Bay Area, there’s an actual island here called Treasure Island, which when I found that out, I was like,
18:56
this whole place is... what are you even doing? I was like, maybe we can build like a large tower that looks like a skull, and then you can have the boats drive into the skull bay, because clearly if you are on Treasure Island, like, you really got to
19:11
lean into the pirate theme, right? Well, and there's probably some treasure there to help fund this project. I mean, yeah, yeah. It's kind of halfway between San Francisco
19:20
and Berkeley. That's right, exactly halfway in between.
19:23
I have heard at least a couple people say that the San Francisco and the greater Bay area is kind of like the modern day Athens. This is like where all the major thinking is happening right now. And anyone who really wants to contribute basically moves here.
19:36
Yeah, I mean, I do think, to be clear, like, New York obviously continues to be a really big major hub for a lot of thinking, no doubt, and of course in media and various things we do have LA, and in Europe we have London and various
19:49
places. But I do think that, especially in as much as you care about AI, SF and the Bay Area is obviously the place where things are happening, and I think that just made it very hard to consider anywhere else. And, you know, we talked to people, but, like,
20:02
I think the network effects were just extremely strong, especially just, like, if you think about the rationality community and kind of the extended long-term community and so on. Like, it’s not that insular. You know, it often gets described as insular. But, like, of course, people have jobs. People have extended family relationships, all of which just, like,
20:20
really tie you down and kind of work like roots in a physical location that's very hard to move away from after people have lived in a place for many years.
20:28
It does also make it hard for other people to move here. That’s right. As I have discovered, but you know, it’s also, you kind of have to because that’s where everyone else is. That’s right.
20:38
And then we did some experiments. We kind of did a fun prototype where we created a whole website and prospectus, as I mentioned earlier, in the Amazon PR FAQ style of: what would the, you know, metaphorical press release for announcing this project look like?
20:51
We had a whole prospectus with an FAQ for a campus in the Presidio. But the East Bay, rather than San Francisco, did seem like the place where most of the people we most wanted to work with were. And then we really tried to just get prototyping and try to falsify it as quickly as possible.
21:08
And then we started the Lightcone offices. Oh, did Lightcone not exist before this? So I started thinking more broadly about what we wanted to do and noticed these bottlenecks around 2020. And then indeed, the name Lightcone Infrastructure was announced jointly with the Sanity and Survival Summit.
21:25
So we kind of sent out the announcement and we're like, hello, we're no longer just LessWrong, we're the Lightcone Infrastructure team. That name reflects the mission that we have, which is trying to be broader and trying to be kind of about taking responsibility at a broader
21:38
scale. At that point Lightcone existed as a distinct name. And the other reason why we really needed to rename to Lightcone is that, as part of starting to work on things that were not LessWrong, it just became very hard to refer to the team within LessWrong, the organization that was working on LessWrong —
21:57
you now had three different things that people meant by LessWrong. Like, I would talk to people and be like, at LessWrong... And then people would be like, oh, so you mean the website? And I'd be like, no, I mean the organization that runs LessWrong, which also has the name LessWrong.
22:11
And then I would be like, on the LessWrong team. And then they'd be like, oh, you mean the organization? And I'm like, no, no, I mean the team that is part of LessWrong, the organization that runs LessWrong. Right. It became even internally impossible to refer to any of the things. So, of course,
22:27
the very next thing that we did was to create an organization named Lightcone, launching a product named Lightcone with a team working on the Lightcone offices, having learned our lesson. I somehow did not notice this when I made this announcement, and then three months later was indeed like, wow, yeah,
22:44
it is impossible to refer to the team at Lightcone that is working on the Lightcone offices. Same game. That naming thing was actually genuinely one of the big reasons for why we renamed ourselves. And then we ran the Lightcone offices for a while, but it was kind of always intended as a test.
23:00
It was very short-term leases. We basically took three-month leases, and we rented from a WeWork in downtown Berkeley and transformed a whole floor of that WeWork into a co-working space and place for many researchers and various programs to run. We tried to falsify various ideas about how to kind of structure that kind of in-person infrastructure,
23:17
various forms of gatekeeping, what was a good choice, what was a bad choice. And then FTX collapsed. And then we went through a few months of deep existential crisis about how we want to relate to our whole ecosystem and whether the kind of responsibility we took on
23:32
kind of in the transformation to Lightcone Infrastructure was the right choice. Because indeed, in as much as I want to take responsibility for this whole ecosystem and the whole kind of extended pipeline that we're involved in, I think that also created kind of a natural sense of like, oh,
23:45
does this mean I am therefore responsible for FTX? Like, explicitly, in as much as I want to be responsible for the positive outcomes, I feel like I should now have a relationship that also takes seriously the negative outcomes.
23:59
renegotiate our relationship to kind of the extended ecosystem that we were part of.
24:03
Wait, did you think that you did have some responsibility for FTX? Oh, totally.
24:07
So many things. So many. Why so? So many.
24:09
Like, you weren’t involved in any of the business decisions.
24:12
Yeah, sure. I definitely wasn't there being like, yes, Sam, let's add the go-negative, allow-negative flag. But yeah, I mean, I've written about this in a bunch of online comments and forum comments and LessWrong comments. I think at the core of it, the things that I regret the most were like...
24:32
So I think the core of it is like, ask yourself, why was FTX as harmful as it was? I think one story you could tell is that FTX, you know, was a cryptocurrency exchange in which the CEO decided that it was okay to
24:45
basically pretend to have money that he didn't, and then use that non-existent money to take out various loans, end up overleveraged, and basically spend customers' deposits. Definitely a core part of the story and in some sense the most important thing to understand.
24:58
But I think there’s a question of why did he have so many customer deposits in the first place? Why was he capable of doing things under so much leverage? And why was he capable of being in that position despite at various points doing various things that already looked slightly sketchy or shady in a way that people
25:15
didn’t properly notice? And I think it’s really important to understand that FTX was very widely known in the finance world and the politics world as the responsible crypto exchange, as the legitimate crypto exchange, as the crypto exchange run by grownups.
25:30
I think some of that was like down to the charisma and kind of the way Sam and a few other executives portrayed themselves. But I think a non-trivial fraction of it was because FTX was vouched for by many people that others really trusted. There was a trust network there.
25:44
There were people who vouched for Sam on a kind of repeated basis, which ultimately, very solidly in the eyes of many, moved Sam out of the potential reference class of just a reckless crypto founder and into somebody who knows what they're doing, who's a grown-up, who's responsible.
26:00
I think the effective altruism component was really quite substantial here. The fact that he was donating so much, the fact that there was all of this endorsement from a substantial part of the effective altruism community, and a lot of the people that he was working with,
26:12
who then ultimately ended up being the kind of people that were in some sense possible to drag into a political conspiracy. Among the people that ended up trusting Sam, when I think they deeply regretted it, were of course a lot of the core Alameda and FTX staff, all of whom were hardcore EAs and hardcore longtermists.
26:29
And I think another thing to understand is: I worked at CEA. And I think we talked four years ago, so that was, like, I think a year or two after I must have left CEA.
26:37
Real quickly, CEA is?
26:38
Center for Effective Altruism. And so I think the thing to understand is, I left CEA in early to mid 2017. My CEO at the time was Tara Mac Aulay, and my board member just around that time was Sam Bankman-Fried. They basically almost immediately, like within a year or two after I had left — like late 2017,
26:58
early 2018, I think maybe into late 2019 — there's a lot of complicated politics, a lot of complicated inside-baseball dynamics, but basically the organization that I had helped build, which was the Center for Effective Altruism and the Effective Altruism Outreach team and various things like this: the leadership of that organization then basically left, recruited
27:19
about 10 to 20 of the most competent people from the effective altruism community, and founded Alameda Research. The founding story of Alameda Research was the leadership of the Center for Effective Altruism deciding to found Alameda Research.
27:32
Wow.
27:33
Later on, there was a big fallout, and I think around 2019, 2018, approximately 50% of the relevant staff quit, importantly out of concerns for the character of Sam.
27:43
Yeah. But at that point, you had already left. Why do you feel personal responsibility for this? Like, were you one of the people who said, I think Sam Bankman-Fried is trustworthy? Yeah.
27:52
No, but I had founded the organization that had empowered, at the time, one of the co-founders — like Tara Mac Aulay, who was my boss at the time, and also a close friend of mine for many of those years. I had empowered her to be in that position via kind of being involved in founding that organization, and given her a lot of legitimacy.
28:09
I ran the first two EA Globals, which in some sense, I think of them in some sense as the events that established the modern governance of EA. Okay. Kind of before EA Global 2015 and EA Global 2016, the EA Summits had been run by Leverage Research, but it was always a very contentious issue.
28:27
They were kind of a niche group.
28:28
So it was kind of your work that helped create all this.
28:31
That's right. I think, like, in a very concrete way: I had empowered many of the relevant people, I had created a status hierarchy and created a culture and community that ended up funneling talent towards this. And I think many of the individuals took the right action. Like, as I said, many people left.
28:46
But it just wasn’t enough. Despite many people leaving, there wasn’t an appropriate follow-up. FTX still ended up very highly trusted in the ecosystem, despite us being by far in the world the people best positioned to notice that shady things were going on. Having had a high-profile dispute with Sam Bankman-Fried about him being kind of reckless with money,
29:07
we really had enormous opportunities to notice that and do something about that in time. And so I think, you know, we had a responsibility to therefore prevent some of that. But also beyond that, we just very, very actively lent our name and our brand and our identity and our integrity to the support of FTX.
29:23
And I think basically out of financial self-interest. I mean, the amount of money was truly staggering. He was by far, I think maybe of anyone in all of history, but at least of anyone presently alive, the person who had become a billionaire the fastest.
29:33
So you think, since the organization was basically executing a lot of the initial programming that you helped put in, you would have probably fallen into a similar trap of not having enough safeguards?
29:45
I mean, I left CEA because I thought it was a pit of snakes. Oh, okay. Like, it's not that I...
29:53
Well, then you really shouldn’t feel like you’re that responsible for it. You can’t take over the entire organization and re-aim it once you— So one of the
30:00
things that I could have done, and I think it was actually one of the things that I did a bit too late. The last time I was actually on this podcast, I think it might have been — I did two podcasts in relatively quick succession where
30:11
I actually first talked about my experiences at CEA and why it felt to me like it set things up in a very dangerous way. The thing that I regret the most and where I think I deserve the most blame is indeed to just like have these extensive concerns and just never write anything
30:27
publicly about them until like 2021, 2022. I think it was a huge betrayal of kind of like what our ecosystem was about and the trust network that we had. I did it because many people over the years kept telling me that it was a bad idea, that it would cause lots of public drama.
30:40
that it would like draw lots of attention in unproductive ways because, you know, various people might get angry and in dumb ways, which I think I have a lot of sympathy for. But I do think that was just a choice that I think caused an enormous amount of
30:52
harm or alternatively could have caused an enormous amount of good if I had made it differently.
30:56
I think that this has been the downfall of more than a few movements where don’t say critical things because it’ll hurt the movement. That’s right. Yeah.
31:04
Yes. It is a thing that these days I have extremely intense immune reactions to. And whenever in my extended environment I now see people making that argument, I react with much emotion. But at the time I believed it. Yeah. I think, to be clear, there are good arguments for it. Like, I think Scott Alexander
31:22
has historically written quite a bit about how indeed there are mobs, there are social movements — and I think in particular some of the radicalized left — that have historically been out for blood and trying to cancel people. And in that environment, revealing information that sends the mob on you is dangerous and is something that
31:41
I generally like it when my environment doesn't do. But FTX really showed me what the cost of that is, in as much as you also end up in a position of just, like, large amounts of power and influence over the world. Then FTX happened.
31:54
We had a bit of an identity crisis, as I was trying to understand it. I think the other big thing that's kind of really important to understand about the FTX point is just, I updated really hard that the concerns that I observed in our extended ecosystem were not just things that seemed bad to me from the inside.
32:12
As I said, I left CEA kind of being like, well, this is a pit of snakes.
32:16
What made you think it was a pit of snakes specifically?
32:18
For example, Tara had become... I had a conversation with her later, after I had left the organization, but basically she had become CEO sometime in like late 2016, but she didn't tell anyone on the US team about that fact, which is a very weird thing — like, how can you be CEO?
32:38
But like, you know, Will MacAskill was the official CEO. He was launching Doing Good Better and so was kind of busy with the book promotion tour and various other things. De facto, Tara had started being CEO, and within the UK office was kind of being de facto
32:52
referred to as executive director, as Will was moving on to other stuff and had his own team. But she kind of described it to me as intentionally creating an environment — she was leveraging the fact that the US part of the organization did not know that fact, such that she would have conversations with
33:08
people in which they underestimated her and therefore she would have more ability to like notice who was plotting against her in various ways. People within the organization plotting against her? Yes. Why were they plotting against their own CEO? Why would you plot against your own CEO?
33:23
Well, you know, first of all, half of the organization consisted of spies. What? Government spies, or what kind of spies? Leverage Research spies. Oh my god. Yeah, I mean, it's important to understand that kind of, like, the way CEA started is just a
33:37
weird one. Leverage Research kind of started the EA Summit and then was involved in various early EA activities. In 2014, after they ran the big EA Summit, they had a conversation with, at the time, 80,000 Hours and a few other people
33:51
involved with kind of the UK side of effective altruism, that it might make sense for them to take over some of those things and run kind of the next big EA conferences. But there didn't exist a thing like CEA. There happened to exist a legal umbrella called the Center for Effective Altruism that nobody had ever referred to.
34:08
It was just an entity that people made up so that you could have Giving What We Can and 80,000 Hours under the same organization.
34:16
But it was a legal organization at this point, right? That’s right.
34:18
It was a legal entity, but it didn’t, for example, have an active executive director. It didn’t have any full-time staff. It was just a legal entity that existed to be a charity. Having umbrella organizations is not particularly weird. It’s basically a conglomerate. But it meant that it itself was not really an institution.
34:33
It was just kind of an umbrella organization for Giving What We Can and 80,000 Hours and maybe one or two more other organizations. But then the Center for Effective Altruism wanted to start doing things like the EA Summits and running conferences. And so Niel Bowerman, who later on worked at FHI quite a bit,
34:48
and I think before had also worked at FHI a bit.
34:49
That’s the Future of Humanity Institute.
34:51
That's right. He took on the role of, I think, the first executive director of the Center for Effective Altruism, but with basically no staff. And then he reached out to Kerry Vaughan and also James Norris. Whole fun story there with James Norris and the first EA Global, where I had to,
35:09
in my very first job while I was still in college, fire the event director for EA Global two weeks before the event, because it wasn't working very well and he basically wasn't doing his job. Except... it was complicated. He recruited Kerry
35:23
Vaughan, and Kerry Vaughan relatively quickly ended up very close to a lot of the Leverage crew, and then he recruited Tyler Alterman, who recruited Peter Buckley, and then also recruited me. And kind of the whole organization was remote. And a lot of the movement-building activity that started happening under the Center
35:39
for Effective Altruism umbrella started happening in the Bay Area. And within the Bay Area, a lot of their talent and attention started being relatively close to, and being drawn actively from, Leverage Research. However, the rest of CEA really, really hated Leverage Research. And so a lot of the people who were either being semi-recruited from Leverage
35:57
Research, or applied to the team, or were very sympathetic to it, generally kept the fact that they were very close to it private and secret from the rest of the organization. And then, because CEA basically didn't really have an organizational structure, many of those people then took
36:12
management from people at Leverage, because, you know, they wanted people to pay attention to what they were doing. And so they would have someone at Leverage Research that they report to, while also technically having someone at CEA that they report to, and then would start living at Leverage and receiving salary from them, of course,
36:27
keeping all of that secret. And those people would then often internally at Leverage Research be referred to as the Leverage CEA spies, which I think was basically accurate as a description of what they were doing. That, as you can imagine, was not an amazing organizational context in which things happened.
36:42
To be clear, it was actually one of the most productive teams I was ever part of. I'm still quite proud of what we did in terms of just logistics and achievement of the early EA Globals, 2015, 2016. What was Leverage doing with all these spies? I mean, you know, influencing it.
36:57
One of the things that brought things to a head, as a concrete example, was the Pareto Fellowship. The Pareto Fellowship was a fellowship that the Effective Altruism Outreach team, the US-based movement building team at CEA, launched in 2016. It was huge. I think it got like 8,000,
37:12
10,000 applications from like really a huge fraction of just like the world’s most talented people. It had extremely good branding. It was like very much very centrally rooted in EA and like a framing of EA that was very popular, very viral. And I was kind of helping with that quite a lot.
37:27
Tyler Alterman was the person who ended up running a lot of it. And multiple times I was like, Tyler, are you doing all of this as a Leverage recruitment funnel? And he was like, no, definitely not. He was definitely doing it as a Leverage recruitment funnel. Like, it became very clear.
37:39
I talked with Tyler a year or two ago — he has left Leverage and also kind of regrets many of the things he did at the time. But he was very much like, yeah, the program itself basically was a copy of the Leverage Research training and onboarding
37:54
combined, and the interviews were basically kind of a lot of the Leverage interviews. And yeah, I think they attracted an enormous amount of very capable talent that was worth a huge amount to them, as a result of the activities of their spies.
38:08
Okay. I was hoping maybe someday in the future to have an episode on leverage, but since we’ve talked about it so much right now, can you maybe give a quick two, three minute overview of what leverage is for people who have not heard of this before?
38:19
At my old group house, we used to have a timer. We had three timers. I don't remember what the third one was about, but one was the Leverage Research timer and the other one was the consciousness timer. As soon as anyone mentioned Leverage Research
38:32
or consciousness, you would set them, and they would be, I think, 15 minutes and 30 minutes. And when the timer went off you had to stop, because otherwise the debates about what exactly consciousness is and what moral patienthood consists of, or the infinite storytelling about the weird shit going on at Leverage Research, would
38:49
consume all conversations. Yeah, so, you know, luckily these days Leverage is much less active and I'm much less worried about that, but definitely it was a topic that for many years was very interesting and kind of had a very juicy, gossipy nature to it.
39:01
I actually didn't realize Leverage was still going. I haven't heard anything about them in a while.
39:04
Yeah, I mean, these days they're very small. Almost all their staff left. I think there's four staff left.
39:09
What was their goal? What was Leverage doing?
39:12
I mean, I don't know. I think roughly the story that I have is: Geoff Anders was a philosophy PhD student, and I think subsequently a professor, who really loved Descartes. And then at some point he tried to figure out what good reasoning looks like from first principles.
39:26
He started writing a series of blog posts and research papers on the internet, and then decided that psychology research ultimately is the right way of figuring out what both effective institutions and effective minds should look like. In some sense, a very rationalist mission.
39:41
Was this before the replication crisis?
39:44
Yep. One of the things for which I think Leverage actually gets the most Bayes points from me is: I interned there in 2014, and I had many debates about the replicability of cognitive science. And that was just at the very initial phases, but mostly solidly before the replication crisis. And they were just completely right.
40:03
I was, like, very cognitive-science-pilled. And they were very much like... I think they basically got the rough things right. They were like: small effect sizes, using the statistical methods of cognitive science? Completely useless. There's no way you have correctly detected a 5 to 10% improvement in this thing.
40:18
If you want to do interesting cognitive science, you want to look at large effect sizes. The whole philosophy of everything they did was always: we want to do psychology research, but we are interested in large effect sizes, not small effect sizes. We want to find things that, like, improve the productivity, improve the output,
40:34
improve the robustness of concepts by 2x, 3x, 10x, 30x. And, you know, the variance of human performance is quite wide. So it’s not like you have a 0% prior that, like, there exists such methodologies. Especially if you extend that to include forms of social organization,
40:48
where it is very clear that there exist companies out there that are 100 times more productive than other companies. And it’s reflected in things like market cap and revenue and other things. And so that was kind of the philosophy in which they started.
41:00
For many years, they mostly recruited people to sit and write Google Docs to each other. Long Google Docs, thinking about psychology. Generally relatively not very experimental or statistics-oriented. But they did do experiments on things that looked like they would have large effect sizes. For example, famously, the experiment in the rationality community that I think has resulted in the
41:21
largest number of disclaimers and warnings, which was the Leverage Research polyphasic sleep experiment. Genuinely, I think, a valuable contribution to the state of human knowledge; it also just drove a bunch of people really crazy. Turns out polyphasic sleep, man, actually doesn't work. Well, it works for some people. Like, Matt Fallshaw is one of
41:41
the people who for many years hung around the Leverage orbit — I think he has funded them a non-trivial amount, and has also been involved with MIRI and various other organizations. He has been polyphasic for many years, just, like, solidly, healthily, productively. He runs Bellroy, which is a large, successful wallet manufacturing company.
42:00
But you can’t hack yourself into it.
42:02
It seems that at least about 50% to 60% of people will just have literal hallucinations if they try to force themselves to stay on a polyphasic sleep schedule.
42:13
I saw someone in my life personally have a bit of a breakdown trying to do polyphasic sleep. Yeah.
42:18
So yeah, so like, you know, they were definitely, you know, walking the talk of like, we want to find large effect sizes. And you’re like, well, I think polyphasic sleep, like, I think it’s pretty plausible. You know, the basic argument for polyphasic sleep is that at least when we observe people who have multiple sleep cycles,
42:32
there’s a rough argument that you end up with substantially more REM sleep cycles, which generally is associated with the things that you need most for sleep. And therefore, if you have more like three, four, five sleep cycles in a given night,
42:44
you might end up needing as little as 50% as much sleep as you would need if you were sleeping for one long block, as most people do.
42:50
It just doesn’t really work for most people.
42:52
That’s right.
42:52
So trying to improve the efficiency of humans through research sounds pretty benign. Why is there so much lore about Leverage?
43:00
Yeah, well, first of all, you know, you’re putting a bunch of people together in a house to do psychological experiments, mostly on each other, aiming to have the largest effect sizes possible.
43:11
Sounds like a Vault-Tec setup.
43:15
Interesting things happen. Some of the things that happened... there's another thing that I think ended up being very costly for Leverage, where I think they actually kind of had a point, as I later on thought more about other contexts, even if in the case of Leverage it was
43:28
quite misapplied. Leverage was like: we really care about improving humanity, we don't just want to marginally empower humanity. Like, they bought many of the arguments for anthropogenic existential risk, anthropogenic catastrophic risk. They were just like: right now, it is not clear what happens if you just increase the productivity of a bunch of smart people by 200%.
43:47
I think that’s cool. I think overall, it’s a pretty good idea. But you’re not obviously, obviously making the world better. And as much as you’re worried that a large fraction of the future will be lost because humans are doing things that are dumb.
44:02
And so if you just make them more productive or make it easier for them to do things more quickly without allowing them to do things more wisely, you’re not necessarily improving the world. And taking that argument seriously, they were like, whenever we develop cognitive technology, whenever we try to do any of these stuff,
44:16
we’re going to do it privately with a very, very strong confidentiality by default norm. And so whenever anyone did any experiments at leverage, whenever anyone tried to do various things in the space, the techniques and the experimental results and the things that they experienced would generally be considered very private.
44:30
That itself, I think, creates a context that is very cognitively scary and draining for people. You're there, you're trying to do weird, crazy things with your mind, and you can't even talk to anyone else except your boss and your reports and your
44:44
colleagues about it. I think that itself kind of created a lot of drama and a lot of dynamics. People lived in the same house — I think that made sense in as much as you're already insular; I generally don't mind shared living arrangements, I think in many contexts they can work pretty well. But I think in that context,
44:58
specifically aiming to do psychological experimentation, I think it would have helped quite a bit to have a bit more distance, more grounding in other parts of the world. And then another component was, I think, that Geoff Anders is, I think, very influenced by what I think of as something like the Thielian,
45:12
as opposed to the Paul Graham, school of politics. Silicon Valley roughly has two schools of politics, where I think the Paul Graham school of politics tends to be very much one of: don't worry that much about the adversarial dynamics, build things that people want — that's
45:28
the central motto of Y Combinator. You create value, you of course make a reasonable attempt at capturing some of that value, but you try to avoid the part of life that can be dominated by zero-sum competition, by conflict, by people trying to fight with each other over resources.
45:46
Build things, make a reasonable attempt at trying to capture the surplus; in the long run, the arc of history will reward the builders. Peter Thiel ran PayPal, one of the most successful and, like, famous companies that he has run, maybe next to Palantir now. PayPal is, like, very, very famous for having an extremely, extremely intense competitive culture.
46:07
It's in Zero to One where at some point Peter Thiel realized that maybe he should change something about PayPal. I think they were fighting with X.com, which at the time was Musk's company, and a few other people, before they eventually merged. They had such a competitive dynamic, if I remember correctly,
46:23
that at some point Peter Thiel walked into a room and saw two of the managers talking and assembling a literal pipe bomb that would somehow be used in order to defeat their competitors. It's somewhere in Zero to One. And he noticed that the idea of war between companies was escalating all the way —
46:43
to just going all the way with legal threats, but also, yes, people were considering straightforward acts of terrorism in order to defeat their competitors. Some solid cyberpunk shit.
46:54
So it sounds like that book should have been called Zero to 100 because, I mean, healthy competition is one thing, but pipe bombs, that is another level.
47:02
Yeah, but yeah, I think the competitive culture of PayPal and Thiel is very different. It's very much a "the world is controlled and run by people who are willing to fight for their interests." If you're not someone who's willing to fight for their interests and do what it takes,
47:18
I think you do not really have as much of a place in it. Well, to be clear, I think Thiel is... like, he's not a Darwinist or whatever, but for him the willingness to fight with everything you have for the things that you care about is a core component,
47:31
and being willing to pull out all the stops, make threats, be adversarial, leverage secrecy, leverage information flow. Thiel thinks a huge amount about meta social dynamics. He is very into scapegoating dynamics. He really cares about what the dynamics are by which society allocates blame. And in some sense there's an undercurrent that's kind of Machiavellian here,
47:53
that’s of course like, if you know how society allocates blame, you might be in a position to both do blameworthy things and avoid the blame being allocated to you. If you know that the world runs on scapegoats, it is very tempting to do things that in some objective moral sense would cause you to be scapegoated,
48:11
because you know that the world runs on scapegoats and not actual blame allocation. And so, as such, there's of course both a sense of unfairness that comes with a worldview that is kind of heavily grounded in these ideas — justice tends to be an emotional expression, tends to be a collective scapegoating, a kind of lynching
48:29
dynamic, more so than a fair allocation of responsibility, and that I think produces an adversarial relationship to the world. And Leverage, to finish the loop, was very much a Thielian company. Geoff had worked with Thiel himself quite a bit. Well, not exactly worked
48:44
with him, but Thiel was one of the primary funders of Leverage Research for many years, and I think Geoff had interfaced with Thiel a non-trivial amount. And I think that produced a dynamic that was very ill-fitted for the effective altruism world, which was very much running on a Paul Graham school of politics,
49:01
in which the thing that you do is to keep the politics away from you. And so you had this institution that was in some sense involved in the founding, involved in the creation, and was now surrounding its central leadership institutions and infiltrating them extensively with its spies, who had a relationship to conflict that was very different.
49:18
There was much more of an endorsement of conflict: good institutions know how to think about conflict, take it as an object. And that created a huge amount of conflict. And I think that itself made it a very interesting topic to talk about.
49:28
I've heard quite a few people refer to Leverage as basically a cult. We've had a few cult offshoots from rationality, and Leverage is one of the ones that's often identified. Do you think it's reasonable to call them that, are they close enough to it?
49:41
So I don't know. In my experience, Leverage built up a bunch of quite justified antibodies against being called a cult. I think from something like 2014 (they were started around 2012 or 2013) to around 2019, the organization had many of the things that one could think of as somewhat cultish,
49:59
like many of them lived in the same house, not all of them, but many of them.
50:02
They did psychology experiments on each other.
50:06
But at least during that time, the sense that I got is that Geoff in some sense was pretty solidly in charge, but I did not have a sense that Geoff had an exploitative relationship to the rest of the organization. I've heard that this later on changed in various ways. You know, I interned there.
50:20
I was friends with many of the people there. I often went to their parties. The people there felt very self-determined. They did not feel like there was... I mean, there's definitely a non-trivial amount of hero worship for Geoff as kind of the figurehead. But, I don't know, in some sense to quote Thiel,
50:34
Thiel himself is on the record saying that every successful company is a cult. I think it was cultish in the sense that Thiel meant, at the time, which is: it was intense. The people were all in on the organization. The people had strong bonds. The people did not have a work-life balance.
50:49
The people really, really invested everything in it. And that came with a bunch of really fucked up dynamics. But I don't think it was a cult the way people think of Heaven's Gate, or the cults that drive people completely insane.
51:02
I do think that sometime between 2018 and 2020 or something, Leverage basically became an actual cult.
51:10
And then they’ve clawed their way back out of that since?
51:12
No, I think basically they collapsed violently. And now there’s an organization that calls itself Leverage 2.0. Leverage went from having 35, 40 employees to having four. It’s not really the same organization. It’s not really the same institution.
51:25
Well, that was fascinating. And we have gone on for quite a while. So I want to bring us back to our original topic, which was Lighthaven. That’s right. But wow, that was fantastic.
51:36
Thanks. The only thing that I'd say distinguishes that from a cult... there's a difference between participating in novel psychological experiments and being subjected to adversarial psychology. That's right. And I think that's one of
51:46
the things that really changed between 2018 and 2020. I do think people were doing things, but, I don't know, I suppose I kind of would have regretted participating in the polyphasic sleep experiment. But nobody was subjecting other people to polyphasic sleep in order to make them
52:01
more suggestible to the influence of the rest of the institution. Whereas, for example, when I'm looking at things like Maple and a bunch of things that are currently happening in our extended ecosystem, and also in general things like Buddhist religions and monasteries, it is a very explicit part of Buddhist monastery thought that people
52:22
sleep less because that makes them more open to the ideas espoused in the religion and makes them more suggestible, and that is seen as a feature. Whereas the sleep deprivation that was going on at Leverage was to make people more productive; people were in charge, people could opt out when they wanted to.
52:35
What's Maple? I have no idea what it stands for, or whether it stands for anything. It is a monastery by some vaguely rationalist-adjacent people who are also kind of vaguely Buddhist. It seems basically like a cult. That one seems like a very straightforward, boring, extended Bay Area West Coast cult.
52:55
I think I met someone from Maple at Burning Man. All right. So you wanted to create Lighthaven so there would be a physical place for people to meet together, be in this intellectual milieu, talk to each other, bounce ideas off each other, that kind of thing.
53:10
Yeah.
53:10
So, rough timeline: we founded the Lightcone offices, then FTX collapsed. We had a bit of an existential crisis. But even before FTX collapsed, we were thinking about doing something more campus-like, expanding our experiments. I just straightforwardly did a grid search across the East Bay, where I just went into
53:27
Google Maps and looked at every single property: does this feel like a good place where I would like to create something more permanent, if we want to purchase the property or make a long-term lease? I found eight or nine candidates. Most of them of course weren't for lease; we even approached properties that weren't
53:42
for lease to ask whether we could lease or buy them. The remaining candidates were a former religious school, now mostly office space and still partly a religious school, which for a while had been considered for conversion into a retirement home up in the hills, and the Rose Garden. Those are the two places that we were
54:00
considering. And then we successfully approached the owner of the Rose Garden and found out that we could convince him to sell.
54:06
I heard you had a couple, at least one event, maybe more than one event here before. That’s right.
54:11
All throughout this, we went about everything we did in a very lean, try-to-falsify-what-we're-doing-as-quickly-as-possible way. And so we did quite a weird thing. We just reached out to the owner and we were like,
54:23
let's start negotiating on whether we want to buy this place or not, but in the meantime we would like to rent out your whole hotel for two months. We then started running events. But not only that, we were like, well, we know that if we want to buy this place, we really need to do a ton of renovation.
54:36
At the time, this place was really decaying very badly.
54:39
Yeah, it was like a wreck from what I heard. That’s right. It had been neglected for years.
54:43
Yeah, it had been neglected for years. It went from like a 3.8 TripAdvisor rating to like a 2.3 TripAdvisor rating over the course of four years. A 2.3 average TripAdvisor rating is not good. The average review would be like: the spring of my mattress that came loose in the middle of the
55:00
night stabbed me in the back and left me bloody. As I tried to leave my room, rats scurried out of my bed and I had to walk through the puddle that had formed as the rain flowed into my fireplace, which was not properly shielded
55:17
from the rain, out into the hallway between my bed and the door. Yeah, it was very bad, it was very bad. We definitely knew we needed to do a bunch of renovations, but we had huge uncertainty. This place is extremely weird in how it was
55:29
constructed and built and what kind of property it is, and so determining how hard it would be to renovate was a huge piece of uncertainty for us. So we were also just like, we would like to rent this place and we would like to start, without having
55:41
purchased this place, just renovating a bunch of the worst rooms. Oh. And he was like, that sounds extremely weird and like something where maybe you will then just leave giant holes behind. But, you know, we negotiated with him. We figured out how to build a trust relationship and we started doing smaller work.
55:55
We introduced him to our general contractor, whom we met at the time and started working with, and built a trust relationship. And eventually we were able, before we needed to put down any substantial fraction of the renovation money or make the full transfer happen, to see what renovation would look like.
56:12
That was a huge benefit. I don’t think we would have been able to take the risk of this place if we hadn’t done that. But yeah, so we purchased it. I think somewhat ironically, the final purchase agreement was signed on November 8th. The day FTX collapsed was November 6th. Wow.
56:29
So I had a very fun weekend where clearly the finances and the amount of money available for various things like AI safety and existential risk and rationality were drastically changing, just over the final days where we had the opportunity to pull out and revert everything that we had done.
56:46
What made you decide to... like, when you see FTX collapse, and literally hundreds of millions of dollars, some fraction of which you were assuming was going to pay for all this, and you saw all that disappear. What made you still decide to go ahead and sign the paper?
56:58
In some sense, Lighthaven is one of the lowest trust projects that we have worked on.
57:03
Really?
57:04
If you think about LessWrong, in order for us to do the kind of work that we’ve historically done on LessWrong, we really need donations. Like, it is the kind of thing that needs to run on people’s philanthropic instincts. And, you know, it’s a very difficult software engineering project.
57:16
And I thought a lot about whether there are various ways to monetize things in ways that would allow us to capture a fair share of the surplus. But it's very hard. Nobody wants us to put ads on LessWrong. There's no way you could fund LessWrong via ads. The kind of value that LessWrong produces is extremely diffuse.
57:33
Microtransactions for content have historically famously failed for every online internet product that has tried them.
57:39
And the average LessWrong reader is the kind of person who has ad blockers on their browser anyway. That’s right.
57:43
Exactly. LessWrong might be an industry leader in the fraction of users who have ad block enabled. LessWrong is actually, in some sense, a high-trust project: building a multi-person team, investing serious effort into LessWrong, and making the jump to commit. I committed to it for five years when I started LessWrong. It was an enormous leap of faith.
58:02
It required extensive trust in there being an ecosystem that cares about the things that I care about, and that people would generously contribute to its continued operations.
58:12
Has that paid off as you’d hoped?
58:13
Definitely. Ultimately, I am very glad that I took the leap of faith on LessWrong. I can't think of anything more impactful that I could have done, and that is an extremely high bar. I thought really hard about whether I would have rather chosen the career of any of my peers and friends.
58:31
It's not obvious, the world is really hard to predict, but making LessWrong 2.0 happen, reviving it, creating very clearly the place in the world where the best conversations are happening about AI and what it means for humanity's future, with I think the best discussion quality in the world for anything that
58:47
is as public as LessWrong is. I think it was just the single best thing that I've seen anyone do in the last five years on the things that I care about, which these days are very heavily routed through reducing existential risk from AI. So I'm, yeah, very glad about that choice, a very solid choice.
59:04
And even apart from the impact, we've been funded reasonably well. We are just in the middle of our big Lightcone Infrastructure fundraiser, where for the next year, in order to survive, we need to raise approximately $3 million for all of the projects. And, you know, we have already raised $1 million.
59:19
That is all from people giving very generously; there are no big, large philanthropists involved so far. It's just people who have used LessWrong, maybe occasionally followed LessWrong's investment advice and invested in crypto early or invested in NVIDIA early, and were like, yeah,
59:34
it makes a bunch of sense for me to pay back, just kind of out of generosity. And I think that was definitely worth it. But it required very much a leap of faith and trust.
59:42
You brought up the fundraiser and the $3 million. And I do have more questions about Lighthaven. But since you brought it up right now, I figured I’d jump on that. Sure. Why $3 million? That seems like so many dollars for people who are not in the Bay Area. Yeah.
59:53
Yeah, I mean, very basic economics. Lightcone currently has a staff of approximately eight full-time people. Well, not full-time people, eight core generalists. We could go into how Lightcone is internally structured, but that basically means those are eight competent software engineers, all of whom would be capable of getting jobs in industry and have a very high opportunity
1:00:14
cost of time. That's for web development. We don't just do LessWrong. We also, for example, have historically run the backend for Overcoming Bias. We've done various maintenance work for Slate Star Codex and various things. As the name Lightcone Infrastructure implies, we are very much an infrastructure provider for many things going on.
1:00:30
But this is eight full-time staff, the average salary of which historically has been approximately $150,000 to $160,000, which is definitely a lot broadly in America, but it's much, much less than any of the people involved could make in industry. And I think there's a question of,
1:00:45
what is a reasonable salary sacrifice to make if you want to work on this kind of philanthropic infrastructure? Historically, the policy that we have settled on was to pay 70% of industry. And here we're talking about 70% of the part of compensation that isn't highly leveraged and highly uncertain. Not the
1:01:03
"you had a 10% chance of joining OpenAI early and therefore would now have $20 million" kind of thing. Just, you know, what's your solid salary, counting only the part of the stock compensation that's liquid? We've settled at 70% as something like, in some sense,
1:01:17
you can think of it a bit like the Giving What We Can pledge, which is one thing that is trying to establish a Schelling point for what it makes sense to give, and that's 10% of your salary. Mm-hmm. I think it makes a lot of sense for the people who are really going all in and who
1:01:29
are building infrastructure and making this their full-time job to give more. But does it make sense for them to give much more than 30%, to give up so much of their wealth and opportunity, and the ability they would have to steer the world in directions that they want, because
1:01:43
they work on things in this space? And I think the answer is: yeah, a good amount, but probably not infinitely much. And so 70% of industry is kind of where we've settled.
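As a rough illustration of that policy's arithmetic, here is a minimal sketch. The industry base figure is an assumption for illustration only, not a number given in the conversation:

```python
# Illustrative only: the industry base salary below is an assumed figure,
# not one quoted in the conversation.
industry_base = 220_000               # assumed liquid cash compensation in industry
lightcone_salary = 0.7 * industry_base
print(f"${lightcone_salary:,.0f}")    # $154,000, in the ballpark of the ~$150-160k average mentioned
```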
1:01:51
I think Steven has a follow-up question on the number of devs, because we talked about this briefly earlier.
1:01:56
Well, just one comment and then maybe a question. The comment was, you mentioned it kind of jokingly earlier, but $3,500 a month for a one-room flat. I think it's just important to contextualize how far money goes in that area. And $160,000 is a substantial sum, but it is not enough to live lavishly whatsoever.
1:02:17
Importantly, I think, yeah.
1:02:19
And in the Bay Area that is also solidly below, or just at the very edge of, the salary where you can have what most Americans would consider a normal family life. A house here tends to cost between $1.5 and $2.5 million. It's pretty hard to finance that, especially on a single income,
1:02:38
but even on a double income, where you have one person making $150K and the other person $50K, maybe working half time or something like that while raising children, I think it's very tight.
1:02:47
As someone who lived in Denver my entire life and has now lived in the Bay Area for two months, I am consistently surprised how much more expensive literally everything is. At least 20%, often 40% more expensive than it is in Denver. From groceries to gas to whatever, it’s just more expensive here.
1:03:03
And Denver, it’s not in the middle of rural nothing. It’s not the biggest metropolitan area. It’s not like you’re living in New York or LA or something, but it’s got decently median prices. So this place is expensive to live in.
1:03:16
So you said eight full-time devs.
1:03:18
So it's important to ask: are they devs? One thing I really want to emphasize is that the same people who build LessWrong are the people who then, for a year, ended up coming to work every day frequently putting on construction equipment. Not literally; it's not like we assembled drywall every day,
1:03:38
but we managed a solid construction team of 15 to 20 people over the course of years, leading a multimillion-dollar construction project. And so I do really think the word generalist is appropriate, in the sense that the way Lightcone operates is we try to structure ourselves organizationally such that we are
1:03:53
capable of solving whatever problems we identify our broader ecosystem to have, which historically has definitely required dev skill, since so many problems can be solved with programming and software engineering, but has also required the skill to manage other people, the skill to solve physical problems in the world, to handle, in our case, construction,
1:04:11
and the assembly of physical products (we've launched the LessWrong books and various other things). There's really a very deep generalist mindset, and I do also think that means the people we've hired are just very competent. In order to hire for that, I've often proudly described us as the
1:04:28
slowest-growing EA organization that exists out there, where we've consistently hired exactly one person per year for the years of our existence.
1:04:37
How many people do you have actually working on LessWrong? Like full-time?
1:04:41
It varies quite a bit. It's definitely the case that there might be a month in which we are working on infrastructure for a different organization. Right now we're working with AI Futures, Daniel Kokotajlo and his team. We're making a website for them to present some of their ideas.
1:04:57
So it varies somewhere between three and a half or four people and seven people. It's very hard to run LessWrong with fewer than two people. When we go down to a skeleton crew, two is the absolute minimum, where if you have fewer than that, things
1:05:09
just fall apart very quickly. You need to be on call if the site goes down; we have to deal with DDoSes, deal with bots, deal with spam, deal with moderation. Two is kind of the absolute minimum to keep things at a steady state. And then, of course,
1:05:24
if you want to make improvements, add new features, really kind of improve things over time, you then talk about somewhere between two and six additional people.
1:05:32
You usually start work, like, it seems 10, 11...
1:05:36
Well, it's confusing. Our time to be in the office is 10:30. At least me and Rafe, and I think most of the team, tend to start work around 9:30, but mostly from home.
1:05:46
Which is nuts, because that means you guys are working 12-hour days, usually.
1:05:49
We just had a long Slack conversation about what the hour expectations at Lightcone are. Mostly we're very chill; I mostly care about whether people get the work done. But I was a bit like, look, I do think 60 hours a week is the reasonable baseline expectation.
1:06:03
If people were only working 40, I think something would be going quite wrong. 55 to 65 is roughly the range of hours that I expect. It massively differs with the kind of work. When I'm doing fundraising work, and especially fundraising writing, like when I wrote my fundraiser post,
1:06:17
I would easily spend two and a half hours a day actually writing and the rest just staring into nothingness, distracting myself, cycling through my infinite cycles of Facebook, Hacker News, LessWrong, EA Forum, Twitter, Facebook, Hacker News, LessWrong, EA Forum, Twitter.
1:06:34
As a person who has never worked at a startup and who has been pretty strict in my life about trying to keep my work hours to roughly 40 hours a week when I’m working for a corporation. And I mean, I worked in accounting,
1:06:46
so that meant some weeks I would work 20 hours a week and then other weeks I would work 58, 55 hours a week for it. So it averaged out to 40 hours a week. Is this normal for a startup? Why 55 to 60 hours a week?
1:06:56
It's definitely normal. I mean, I've known very few startups or small organizations in the Bay Area, at least, that work substantially less. I think Paul Graham has an old essay that's like: you can choose exactly one thing that you can do next to work. If you run a startup, you can have a family,
1:07:14
you can have a hobby, you can do one thing, but not two things. And so my sense is, I don't know, if you subtract the amount of time that one other thing takes and all the remaining time goes to work, I think it usually ends up at around 60 hours. But it depends heavily.
1:07:26
I mean, like, I am one of those cursed people who needs nine hours of sleep a night. Otherwise, I cognitively deteriorate quickly. Wow. So for me, 60 hours is pretty costly. And more seems kind of hard to arrange. But, like, I know other people who only need six hours of sleep a night.
1:07:41
So if you’re working 12 hours and sleeping nine hours, that leaves only a few hours in weekdays, like what, maybe four hours for other life stuff?
1:07:51
That's right. Okay. A thing that came up in our internal conversation about work hours was, does being on Slack count as work? At Lightcone, it definitely has to. Aaron Silverbook has, I think, a small booklet, something like the Adventures of the Lightcone Team. He has a section where he describes the Lightcone hive mind,
1:08:07
where just everything we do runs through Slack. We have automations where our whole campus tends to route through Slack. When somebody rings the doorbell, a message gets posted to Slack with a video of the person ringing the doorbell and why they want to enter campus.
1:08:19
When we receive a donation, that’s a message that goes into Slack. When we receive an application for a new event that runs here, that goes into Slack. When somebody applies on our job application page, that goes to Slack. When a large bank transfer happens,
1:08:32
that goes to Slack so I can review it and understand whether that is an expected movement or not. So yeah, kind of everything constantly lives on Slack. And that creates this environment in which, I think for many people, it is natural to spend some fraction of the day outside of work being active on Slack, chatting,
1:08:50
talking to other people about strategy. You're not necessarily fully in the office, but it definitely is quite a lot. About once every two or three weeks we tend to have some long Slack conversation until 1am into the night. What could you
1:09:03
do if... would you even want an AI to have full access to your Slack and act as a Slack brain?
1:09:10
So we've considered it. Actually, as soon as, I think, the first version of GPT-4 came out, we had a hackathon here as one of the first events we ran at Lighthaven, where the project that I was working on was basically a Slack summarizer, a
1:09:24
thing that tries to keep everyone up to date in Slack. Very annoyingly, we haven't been able to try out any of the... Slack now has AI features and it can give you summaries of what happened in different channels and stuff. The annoying problem is that it costs $10 per user per month.
1:09:40
And one of the other things is that all of our client relationships are also on Slack. And so we have something between 400 and 700 active members, because every single person who historically worked from the Lightcone offices works from Lighthaven, plus everyone who's visiting, and
1:09:58
sometimes conferences create channels for things. I don't really want to pay the $10 a month for each one of those users. So I have no idea how good the Slack AI features are, but I've considered just creating a fully new Slack workspace just so I get to try them out. I think it could be huge,
1:10:13
but it's kind of messy, because with AI summarization, I've been very disappointed, and I think I know why, but I've still been quite disappointed about how good summarization has been in almost any context. I don't get any value from AI summaries of papers. I don't get any value.
1:10:30
We've experimented very heavily on LessWrong with: can we give people AI summaries of posts? AI tutors can work quite well, where somebody asks an AI about a specific question or topic that they care about. But something about the summarization step just seems to not really work.
1:10:45
You describe what the article is about in a way that gets rid of all the content of the article. You end up with a description of something like, and then they made an argument about the risks from artificial intelligence. But that’s completely useless. I get zero valuable information from knowing that an argument was made.
1:11:04
And so summaries often, maybe it’s a prompting issue, maybe there’s various things you can do, but at least in my experience, they kind of replace the substance with a description of what the substance is about, which often is kind of useless. And I have similar experiences with Slack where just like it really isn’t that
1:11:19
helpful to know that like there was a discussion in a channel about who should be responsible if the site has an outage at 2 a.m. Like, OK, I guess it is useful to know that a topic was discussed. But of course, the thing that matters is what did people argue about?
1:11:36
What are the arguments that people brought forward? Where did things settle in terms of the conclusions and various things like this? And so at least in that context, I haven’t really been able to get things to work. But I think it might just be a prompting thing. It might be various different modalities that you want to use.
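For readers curious what the Slack routing described earlier might look like in practice, here is a minimal sketch using a Slack incoming webhook. The webhook URL, event names, and helper function are hypothetical placeholders, not Lightcone's actual automation:

```python
# Minimal sketch of routing campus events (doorbell, donations, applications)
# into a Slack channel via an incoming webhook. URL and event fields are
# hypothetical placeholders.
import requests

WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

def post_event(kind: str, detail: str) -> None:
    """Post a one-line notification to the team channel."""
    payload = {"text": f"{kind}: {detail}"}
    resp = requests.post(WEBHOOK_URL, json=payload, timeout=10)
    resp.raise_for_status()

# post_event("Doorbell", "Visitor at the front gate (video attached in thread)")
# post_event("Donation", "$1,000 received via the fundraiser page")
```

As a side note on the cost question raised above: at the quoted $10 per user per month, Slack's built-in AI features would run roughly $4,000 to $7,000 a month for 400 to 700 members, which is why they haven't been tried.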
1:11:49
Okay. I was thinking that if catching up on Slack threads counts as some work hours, then that 60 hours a week doesn’t sound quite as daunting. In my last job, I was told by my manager, never work past five. She would take a nap during the day because she’s been working from home for 20 years.
1:12:05
It’s just how to stay sane, I guess, with the work-life balance when your work is also your home.
1:12:13
Yeah, that kind of makes sense to me. I cannot work from home. I think the only reason I survived the pandemic is that I was in a group house with my co-workers, so that allowed me to keep working. I think I would become sad
1:12:26
very, very quickly if I was in an apartment working from home.
1:12:30
Yeah, it's definitely an adjustment. There are a lot of pros and cons. I mean, pro, the commute is great. But the con is it's really hard to get just water cooler chat. And is it all self-hosted so that all the keys to the kingdom and everything stays in the hands of people that you know?
1:12:49
Or is it because other technological solutions won't fit the bill? Nothing else out there is quite LessWrong, but you mentioned you're working on a project for another company. Couldn't they just use, you know, Squarespace? What sort of extra stuff do you guys bring to the table? I'm just asking out of curiosity.
1:13:04
I’m not like trying to like, this is not me nickel and diming the bill, by the way. I’m just curious.
1:13:09
Yeah, I mean, I think one of the big things is design. First of all, for LessWrong itself, we started out by forking an existing open source forum repository; that's kind of how LessWrong got started. But by now, you know, 99% of the
1:13:22
code is written by us, just because there doesn't exist anything out there on the internet that does anything like this that you could adapt. Old LessWrong was a fork of Reddit, and that in some sense was a huge stroke of
1:13:33
luck. Reddit was open source for a short time before they decided to make future versions closed source. A substantial chunk of LessWrong's initial death was downstream of having made that choice, where you relied on Reddit being open source but then no longer being updated, and then kind of becoming
1:13:49
dysfunctional, a code base that nobody who was then working on LessWrong had been involved in developing. And so there wasn't anything else like that, and the stroke of luck of Reddit being open source for even just a short time
1:14:02
was in some sense the thing that enabled LessWrong 1.0. And so getting to do something as complicated and rich and complex as Reddit is of course a huge amount of work. Reddit's original versions were of course built by only a few devs, but these days it has
1:14:17
hundreds, thousands of devs working on it. I think another thing to really understand, at least in the LessWrong context of web development: LessWrong in many ways, I don't think it's fully true, but in many ways it is the last forum
1:14:31
of its kind. The era of independent web forums is long over; social media has consumed the vast, vast majority of the web. And of course, the reason for that is that it is very hard, as just someone who runs a phpBB forum or something like that, to compete with the kind of features and the kind of
1:14:47
expectations that a modern consumer has of a discussion platform and a social media platform. While there was a period of the internet where you could run a website that could compete with the attention mills of the rest of the internet by just paying a random forum provider a tiny bit,
1:15:01
on the modern internet that race has de facto basically been lost. Very few forums are actively growing, and if they're growing, they're growing around specific niches and are kind of on an up-and-down trend. Everything is conglomerating around the social media platforms. The fact that LessWrong 2.0 has successfully been growing for the last five
1:15:18
years is a huge reversal of basically the whole trend of the rest of the internet. And I think a lot of that is just the fact that we are bringing to bear even just a tiny snippet of the development resources that the large social media
1:15:31
platforms on which most of the internet happens these days are bringing to bear on these problems. You know, people expect good email subscriptions. People expect everything to work on mobile. People expect that posts can be read out loud with AI narration these days. People expect things to work smoothly.
1:15:47
People expect all the design to be intuitive and clean and nice. And if you don't have those things, I do think you are just losing against a lot of the big social media platforms. So I think that is the reference class of where the internet is actually at and the competition that we're facing.
1:16:01
And I do think a huge thing that has bucked that trend is Hacker News, which hasn't changed very much and is still extremely successful. I'm just mentioning that briefly as a counterexample. I recognize it and I think there are some interesting dynamics there, but I'm not going to go into that for now.
1:16:14
Yeah. When we have things like external clients, I do think the key thing that we tend to bring is just design taste. It's just very hard to design complicated websites. There exist many companies out there, you know, you can hire a designer; if you want to have a company website, just a thing that displays your brand and your
1:16:30
identity reasonably well, there are tons of websites and companies out there that can do that. But I think the thing that we have really gotten world-class at over the last six years is the ability to take extremely complicated and dense information, technical arguments, and textual conceptualizations, and combine that with elegant design, intuitiveness, and accessibility,
1:16:49
which really doesn't exist very much out there. The kind of thing that we've been working on with AI Futures: they've been trying to tell a narrativized story about what the world will look like between now and the time when the team roughly expects AGI to be present in the domain of the catastrophically
1:17:05
risky, which they think is roughly going to be at the end of 2027, very soon. And in order to convey the groundedness of that scenario, a thing that we've been working a lot on is integrating graphs and underlying data very deeply into the storytelling. So right now,
1:17:21
if you were to go to our draft website, you would scroll down, and next to the short story that tries to narrativize the rough scenario they're outlining, there would be an interactive dashboard of data and graphs
1:17:32
that shows you who the current leaders in the relevant AI race are, how much the world is currently spending on GPUs, how much of present cutting-edge R&D is being performed by AI systems versus humans. Trying to create a UI that combines an
1:17:49
engaging story and narrativization that you want to read, so you can absorb the relevant information, with this highly complicated data display, is the kind of thing that I think we really excel at. Because in the context of LessWrong, we've been trying to build UI for an extremely widely varying set of very highly
1:18:04
complicated, technical, abstract, and difficult explanations for many years, and have been dealing with a huge number of UI challenges in trying to create something that's both dense and beautiful and capable of expressing these complicated technical arguments.
1:18:17
So I have two questions on this, which hopefully the first one is the faster one. Do these other companies that you work for, like AI Futures, do they pay for the services? That’s right, yeah. Okay, okay.
1:18:26
We just signed a contract with MIRI to take all the Arbital content and move it over to LessWrong and integrate it deeply into LessWrong, which was just a nice positive-sum trade for both of us. As a small piece of context, Eliezer founded LessWrong 1.0. Then, I don't know, he mostly got tired of moderating a giant online forum,
1:18:46
a concern I have no sympathy for whatsoever. And then he started writing in various other places, of course occasionally still posting to LessWrong, and he finished HPMOR and various things like this. But then he had big new online discussion dreams in 2016, 2017,
1:19:02
when he recruited a bunch of people to work on a project called Arbital, which in some sense was trying to be a successor to Wikipedia with a somewhat broader scope, focused around arguments, not just factual statements. And I actually worked there very briefly after I quit CEA, before the company shut down.
1:19:18
I think it was a very, very hard project. I think people tried pretty hard. I think in the end, a bunch of people burned out, roughly gave up, and it fell into abandonment. But during the time when Eliezer was very involved, he wrote many of the best explanations and essays of core rationality concepts
1:19:33
and core concepts in AI safety on Arbital, which have been getting incrementally less accessible every year as the underlying website rots away, with performance issues, pages failing to load in something like 15 to 20 percent of page loads, and the site sometimes being completely inaccessible. And so now we finally have the
1:19:50
opportunity to take that content and really bring it home, where home is LessWrong. Yeah, and so I'm very glad about that. And that includes things like the best version of Eliezer's Bayes guide, which is a good solid introduction to Bayes' theorem, but also includes good explanations of instrumental convergence and
1:20:05
existential risk and various other things like this that I've referenced many times over the years and I think are really good to have integrated into the site. But yeah, they pay us, which is really nice. And so there's been a broader thing
1:20:16
related to the earlier statement about LessWrong being a high-trust thing, and also me kind of implying that in some sense Lighthaven is a lower-trust thing, requiring less trust in our ecosystem, where indeed in the intervening years since FTX's collapse we have been trying pretty hard to move
1:20:32
towards a relationship to the rest of the world where things are somewhat lower trust, things are a bit more contractual, we negotiate about getting a fair share of the surplus in advance, as opposed to hoping that things just magically
1:20:43
work out in the end. And I think that's been good. I think it has made things
1:20:46
feel more grounded and more robust. Okay, when is Arbital going to be available, do you know? Or all the Arbital stuff on LessWrong?
1:20:53
I mean, I think every software engineer will tell you that your forecasts of how long feature development will take will reliably be off by a factor of two, even after you have multiplied the previous number by a factor of two, no matter how many times you have done it. So it’s very hard.
1:21:10
But my best guess is, I don’t know, late January. Oh, soon. That’s right. Definitely not very long.
1:21:16
At an upper bound, we’ll say late January 2026.
1:21:17
Right. What would happen to LessWrong if the entire Lightcone team got frozen in time for a month?
1:21:25
I'm not sure. But right now, for example, we're dealing with a DDoS-like situation, maybe even once every two weeks. We have someone with a very aggressive botnet or a very aggressive crawler who is being very unfriendly. For a while, those were our friends at Anthropic, for example.
1:21:45
We had this very sad situation. I care a lot about content on LessWrong being used to train language models. I think it is good; insofar as I hope that these systems will end up being useful for doing AI safety and alignment
1:21:57
research, I want them to know the content on LessWrong. But Anthropic in particular is being very impolite with its crawlers, where we have a robots.txt that tells the crawlers that come to our website how frequently they are allowed to request it and what their backoff period should be.
1:22:11
The Anthropic crawler completely ignored all of that and took our site down like five times. It caused me to wake up in the middle of the night at least two or three times, needing to deal with an enormous number of requests and then being like, I don't want to block you, ClaudeBot, but you really are not being very nice.
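As a side note on what polite crawling means here: robots.txt is the standard place where a site declares which paths crawlers may fetch, and optionally how fast. A minimal sketch of the crawler-side check, using only Python's standard library (the bot name is a made-up example, and whether a given site declares a crawl delay is up to that site):

```python
# Sketch of the crawler-side courtesy check against a site's robots.txt,
# using only the Python standard library. The bot name is a made-up example.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://www.lesswrong.com/robots.txt")
rp.read()

bot = "ExampleBot"  # hypothetical crawler user agent
if rp.can_fetch(bot, "https://www.lesswrong.com/posts"):
    print("crawl delay:", rp.crawl_delay(bot))     # seconds between requests, if declared
    print("request rate:", rp.request_rate(bot))   # (requests, seconds), if declared
```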
1:22:28
But LessWrong is a very popular website, and so compared to Arbital it has much, much more of a target on its back. And so there's the amount of crawling, the amount of traffic that it gets, the amount of people who have fun trying to hack us from time to time.
1:22:43
We are not robust to hacks. I think a dedicated attacker could easily hack into LessWrong, and, you know, there's some amount of background-level script kiddies and people who will try to attack it. And I do think that just means that after a month, I
1:22:57
think probably the site would still be up, but I'm only 75 to 80 percent confident. Wow. The modern internet is a kind of harsh place.
1:23:04
So why $3 million then? You said you had the eight devs.
1:23:08
That's right. So there's some amount of salaries, which is between $1 and $1.5 million. Then, expanding a bit on what the deal with Lighthaven is: we renovated this place, and we did that all via getting a mortgage. Beautiful, standard financial instrument. No particularly large grants were involved; we just took out a mortgage on the
1:23:29
property. The mortgage was facilitated by Jaan Tallinn, who is a philanthropist, but ultimately it was structured as just a normal mortgage. That means we purchased this property and renovated it for roughly $20 million; we invested a bit more of our own money into it. And that means we now pay interest once a year.
1:23:44
We pay one million dollars a year in interest on our mortgage here. And of course, it takes quite a lot of money to run this place. Revenue-wise, Lighthaven last year made about $2.3 million,
1:23:56
and my best guess is that next year it will make around $2.7 or $2.8 million. A quite high-revenue operation. But importantly, part of the cost that we are currently still paying off, and where some of the $3 million comes from,
1:24:09
is just that there is a natural time it takes for a business like Lighthaven to ramp up. Lighthaven is a weird place, so you can only partially take the hospitality industry as a reference, but usually the revenue of a hospitality business ramps up somewhat sigmoidally, but mostly looking linear, over the
1:24:28
course of three years: a relatively smooth slope from roughly zero revenue to 100% of future expected revenue over the course of three years. We're roughly a year in. I think we're doing substantially better than that; we're substantially ahead of schedule relative to normal utilization rates in the hospitality industry.
1:24:44
But that just means that we are not at maximum revenue. And so over the last year, that meant that we roughly lost on the order of a million dollars in total on running Lighthaven and the relevant operations, mostly in the form of paying our interest. And because we didn't have the money,
1:25:01
we've also had a very hard and fun time fundraising, as FTX sued us at the beginning of this year, trying to get back the money that FTX had given us, which is a whole fun story I could also go into. But because we weren't capable of fundraising during that time, we were really running down all of our reserves.
1:25:18
And we went all the way down to the wire, basically $0 in a bank account, me basically loaning all my personal money to Lightcone, 100% of my personal assets, most of our staff reducing their salaries to the minimum that felt comfortable at all living in the Bay Area, me completely forfeiting my salary. Wow.
1:25:36
As a part of that, we negotiated with Jaan, who gave us our mortgage, to delay our mortgage payments by four months, because we couldn't otherwise make them; we would have basically just gone bankrupt. And so that means next year is particularly annoying in the sense that we have two mortgage payments due.
1:25:51
So we have a mortgage payment due in March and we have a mortgage payment due in November, each one for $1 million. Actually, overall, my best guess is if you just take the next 12 months, ignoring the additional mortgage, Lighthaven is actually quite close to breaking even, like within 100K or something in total expenses and revenue.
1:26:09
And then my hope is that after that, it will actively be a revenue source and allow us to fund things like LessWrong. But that means next year has this additional $1 million deferred interest payment. So if you look at the costs, for why we need to raise $3 million,
1:26:23
it's roughly: $1 to $1.5 million in LessWrong costs and associated project costs, approximately $1 million in a deferred interest payment, and then approximately a $100K to $500K shortfall in Lighthaven revenue. Plus, maybe, if we can make an additional hire, growing the organization a bit,
1:26:45
such that eventually we can both increase Lighthaven revenue and do more great things with LessWrong and various things like this. And so that’s a rough breakdown.
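Tallying the rough figures as stated, the breakdown does land around the $3 million target. A quick sketch using the quoted ranges (in millions of dollars, approximate):

```python
# Quick tally of the fundraising breakdown as quoted above (rough ranges, $M).
budget = {
    "LessWrong and associated projects": (1.0, 1.5),
    "Deferred mortgage interest payment": (1.0, 1.0),
    "Lighthaven revenue shortfall": (0.1, 0.5),
}
low = sum(lo for lo, _ in budget.values())
high = sum(hi for _, hi in budget.values())
print(f"${low:.1f}M to ${high:.1f}M")  # roughly $2.1M to $3.0M, hence the ~$3M ask
```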
1:26:51
So ultimately, it’s part of the hope that Lighthaven will help fund LessWrong?
1:26:56
Well, it's messy. As I was sitting there on November 8th, two days after it had started to look like FTX was collapsing, but the chips had not yet fully fallen, and I was deciding whether to put my final signature on a purchase contract, I was thinking about what relationship I wanted to have to my environment and the
1:27:13
ecosystem that I was part of. And I do think at that moment, the fact that this was a business mattered, the fact that ultimately Lighthaven, I guess in economics you would call it a club good: it is pretty excludable. We can build great things at Lighthaven,
1:27:28
but it is relatively easy to build great things here and then charge the people who want to benefit from those great things, to get a part of the surplus. I absolutely did not expect, at that point in time or before it, that we would be able to make Lighthaven break even.
1:27:46
That was not the goal. I was not expecting that we would be able to pull that off, especially not within as short a time period as we de facto did, because that's not why we renovated it. That's not the Fermi estimates that we did.
1:27:56
The Fermi estimates that we did were about how much value can we generate, how much excitement is there among our funders for doing that. And we had more than just, you know, FTX support. FTX basically did not support us for Lighthaven at all. Our primary support had been from Open Philanthropy and other funders in the space.
1:28:11
And so we had substantial support from other funders. And so we were doing this on impact grants. Because, I mean, honestly, I think it is because we did a really, really good job with Lighthaven that now we just have conferences from kind of all
1:28:26
over the world and programs from all over the world that just want to run things here because we’ve actually just built something world class that allows us to then make enough revenue to actually get anywhere close to breaking even and potentially subsidize other things.
1:28:40
But I think I would be lying if I said I believed in that any time before, like, six or eight months ago.
1:28:45
I've talked to some people who have been here, at least one of whom said perhaps the most impactful week of his entire life was here at Lighthaven. This has been hugely impactful for me. It's one of the reasons I'm moving to the Bay Area. This place is amazing. I guess, final question in this vein,
1:29:01
I have seen at least one person somewhere say: is there any way I can donate just to LessWrong and not to Lighthaven? Because I am nowhere near it, I'm not in America, I will never benefit from this directly, but I really like LessWrong. Is that possible, and if not, why not?
1:29:15
So, I mean, ultimately, like, you can give us an earmarked donation. I’ve always felt that earmarked donations kind of feel very fake to me. If you give me an earmarked donation, I will put it in my bank account and be like, thank you for giving us $1,000. It’s really very greatly appreciated.
1:29:30
I'll make sure we spend it on LessWrong. And then literally tomorrow, when my AWS bills come in, I will take those $1,000 that I've tagged and pay them toward the AWS bill. And now I have no earmarked funds anymore. I have changed no actions.
1:29:45
I have not done anything differently than what I would have done if you had donated money to us in an unrestricted way. Because of course I want to first spend the money that has the most constraints, that has the most strings attached. A very basic principle of rationality is that,
1:29:59
unless you're in weird adversarial environments, increasing the action space and giving yourself more optionality is good for you. And so of course I'm going to spend the money that has the most strings attached first. But that means that in order for something like an earmarked donation to have an effect on us that I think
1:30:15
makes any sense, we need to be talking about solid fractions of our budget. Even if we're looking at the relative budgets of, like, LessWrong $1 million, Lighthaven maybe not even $1 million, I need to start being in the space where I have on the order of $1 million
1:30:29
earmarked for LessWrong in order for those not to just funge away. It doesn't matter if I have $500,000 earmarked for LessWrong and $500,000 earmarked for Lighthaven; nothing changes. I will take no different actions. It's kind of interesting: when somebody donates to LessWrong, should I be like, I will spend an additional $1,000 on LessWrong?
1:30:49
That doesn’t really make sense. In general, like, I am not a service provider. You don’t pay me money. I’m a charity. You’re giving me money because you want me to do more of the things that I’m doing. I don’t really just, like, take the commands of my donors.
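A toy illustration of the funging point above, with made-up budget numbers: an earmark smaller than what would have been spent on the project anyway leaves the final allocation unchanged.

```python
# Toy model of the funging argument. All numbers are made up for illustration.
def allocate(unrestricted: float, earmarked_for_lesswrong: float) -> dict:
    planned_lesswrong = 1_000_000  # what would be spent on LessWrong regardless
    # Restricted money is spent first; unrestricted money tops up LessWrong,
    # and whatever remains goes to Lighthaven.
    topup = max(0, planned_lesswrong - earmarked_for_lesswrong)
    return {
        "LessWrong": earmarked_for_lesswrong + topup,
        "Lighthaven": unrestricted - topup,
    }

print(allocate(2_000_000, 0))        # {'LessWrong': 1000000, 'Lighthaven': 1000000}
print(allocate(1_500_000, 500_000))  # identical allocation: the $500K earmark funged away
```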
1:31:03
But you can imagine a world where maybe we should have a relationship to donors that is a bit more like they donate a thousand dollars to us. And, you know, we’re not going to, like, just split the money exactly proportional, but I will try a bit harder to work on the things that the people who support us
1:31:16
care about. And I do think that works. If you donate to us and you send me a message being like, look, do more LessWrong, I think that's just the thing that ultimately ends up being most impactful and will work out, and it's the thing that I care most about, then I will take that into account.
1:31:30
But I have no real mechanism that I can currently think of where that becomes a binding promise, where that works out in accounting terms in a straightforward way. But, you know, if you donate to us and you want to donate to LessWrong in particular, you can do that.
1:31:43
The funging is a difficult constraint that I think, on a hard-power level, means that it will not really matter. But I will, of course, listen to that. But also, I don't know, man, last year, early to mid 2023, the last time we fundraised, all of our donors were like, what are you doing with this crazy Lighthaven shit?
1:32:01
Just build LessWrong. What? A $20 million hotel project in Berkeley? There's no way that works out. The exact same donors, like Emmett Shear, who donated $50,000 to us in 2023, were like, I don't really get this Lighthaven stuff. Well, you know, at the time it was, I don't really get this campus stuff.
1:32:18
LessWrong seems like a great public good. Here’s some money for LessWrong. This year, he was like, I want to specifically give you money for Lighthaven. It’s the best project that anyone has created in, like, a really long time and obviously seems highly impactful for the world.
1:32:33
So, you know, many of our donors were just wrong. Maybe if you give us money and don't earmark it for something specific, you will be surprised by the amount of impact we can create by doing weird stuff. Because also, nobody believed in LessWrong 2.0. When I fundraised for LessWrong 2.0 initially...
1:32:47
I also just had people being like, that seems completely crazy. Why do you think you can revive this dying web forum in which downvotes are disabled because every time we enable downvotes, all of our core contributors lose 15,000 karma because of spammers and trolls. Yes, you can donate to us, earmarked. I don’t think it does anything.
1:33:04
I will take people's preferences into account. Also, I don't know, consider that you might be wrong and maybe you should just trust us. But also, I don't know, don't trust us blindly. But I do think we have a pretty good track record of doing cool stuff that
1:33:19
looks like a bad idea to other people and then turns out to be a good idea.
1:33:23
I meant to ask earlier, and what you just said reminded me. Do you have a rough estimate of how many weekly or monthly views LessWrong gets?
1:33:32
Yeah, so LessWrong gets between 1 and 2 million views a month, so total viewership over a year tends to be between 15 and 25 million. It's pretty spiky and pretty complicated. Sometimes there's a thing that goes super viral, and then we have like 4 million views a month,
1:33:45
and sometimes it's a chill month, like December, which usually tends to be lower, and it's only 700,000 or something in that range. And a lot of that is logged-out traffic, so many of those sessions are just people reading things for the first time, kind
1:33:56
of with no context. And then it's more on the order of 10,000 to 20,000 logged-in active users who come in more than once a week, and more like 30,000 who come in once a month or something in that range. Awesome.
1:34:10
With Lighthaven being the physical forum for all these ideas, which it just now occurs to me is a great counterpart to LessWrong being the online forum, but with Lighthaven being the physical forum for these ideas to bring people together to get this intellectual movement really producing things in the real world,
1:34:27
do you think it’s met your expectations in that regard? And am I correct in summarizing that that is kind of what you were going for?
1:34:34
This is kind of related to the existential crisis I went through after FTX. The story that I'm hoping for with Lighthaven, in my fundraising post I described it a bit with a river and shore metaphor. The thing that I feel excited about is building a culture of really competent,
1:34:50
smart people who are thinking about the world's most important problems and doing so in a way where they genuinely benefit from each other and get to learn from each other. I think that takes time. One of the things that I've updated away from was: oh, if I just do something that has the rationalist flag, that
1:35:04
has the EA flag, has the longtermist flag, has the AI safety flag, then the thing that will result will have the necessary internal structure and internal components and internal organization to enable that kind of functional institution, society, organism. Instead, I'm just like, I think we need to build it.
1:35:20
And I think if you want to build it and make sure the structure is in place, you want to grow much more slowly. I do now happen to have a property that's like 20,000 indoor square feet and 30,000 outdoor square feet. So I don't really want to just have 95% of it sitting empty.
1:35:35
But also there’s a question of how do you find the people that I want to build this community out of and this set of contributors out of. And I think those actually combine very well where the way I’m thinking about it is kind of as a river and a shore where you have these people coming through,
1:35:47
where we have these programs that run. We host the MATS fellowship, the Machine Learning Alignment Theory Scholars program, which runs here twice a year. MATS brings in a lot of very smart people who care about a lot of the stuff that we care about.
1:36:01
And we get to know them and we get to interface with them while they have their program here. And the people we host also get to interface with them and often talk to the fellows. And then the people know us, and then I think some of the most promising people are then interested in staying
1:36:13
here and working here and becoming more integrated in the things that we're trying to build, and we're building those relationships. And similarly, we have these conferences, like the Progress Forum conference, Manifest, and LessOnline, where hundreds of people come here for a weekend or, you know, a few weekdays.
1:36:26
And we get to meet all of them. We get to build the relationships, build the connections. And my hope is kind of slowly, over the course of a few years, we’re going to take up more and more of the space with a relatively tight-knit group of researchers and writers and entrepreneurs.
1:36:40
And right now, you know, we have like 15, 20 people in that reference class. But I hope it will be more and grow over time. But that means it’s hard to say whether Lighthaven is working in the relevant sense. And I’m like, well, we’re at a
1:36:49
very early stage right now. Our priority has been to get the river going, and I don’t know yet. So far I feel very good about all the choices we have made of who we’ve been hosting here and who has been around, but it’s only a few people. And I think in
1:37:03
as much as we will forever be in the state we’re currently in, I think we’d feel kind of disappointed. Of course, unless we find something else that’s more important. I’m totally fine with a world where I build a great conference center that provides infrastructure for a lot of people that I care about,
1:37:16
even if it doesn’t necessarily grow into the perfect ecosystem and machine for intellectual progress that I care about, if I then find something even cooler to do. Maybe we’ll start working a lot on AI-driven research tools on LessWrong, and then Lightcone is just very busy with that, and Lighthaven runs along and funds our other operations. Also great.
1:37:35
Seems like we’re providing lots of value to lots of people.
1:37:37
So you’ve laid out what the plan is, and the need, and how the situation arose for your current fundraising goal. Is there an expectation that next year you guys will be balanced, or will you also be looking for donations at the end of next year?
1:37:51
My hope is that next year we only need to fundraise approximately $2 million. And then my hope is that the year after that, unless we of course start new adventures, we will only need to fundraise approximately $1.7 million, something in that range, as things ramp up more, which is a lot less. It’s quite nice.
1:38:08
I love my expenses going down year over year as opposed to up every year. And as I said, the LessWrong component of that is actually the hardest because… I have no idea how to monetize LessWrong. I’ve thought through many different options. We could have a merch store.
1:38:23
I have a sense that a merch store isn’t going to drive a million dollars of revenue a year. But we could have some really cool merch. We sell books. We’d make about $20,000 to $30,000 on books if we really got the crank running and Amazon were to reactivate my Amazon Seller Central account.
1:38:40
I think one one thousandth of that was me when you guys launched your first collection of books.
1:38:46
I’ve got hopefully just a few more questions, because we’ve been going for quite a while here.
1:38:51
I mean, I guess first of all… I told you when I am. When you said you were going to be dead, I didn’t see that as a necessary obstacle. I thought Steven here would have my back. I thought that Eneasz might be dead, but I would have another few hours going with Steven,
1:39:06
I guess. He’s in a later time zone, so yeah. What happens if the funding goals aren’t met?
1:39:13
Yeah, it’s really not nice. So if we raise less than $2 million, I think we’re kind of fucked. We have two mortgage payments too. If we don’t make them, the thing that happens, by the contractual agreements of the mortgages, is that we enter default. When default happens, it’s a bit complicated.
1:39:29
There’s a bit of settling up to be done. I think Lightcone as an institution probably survives. I don’t think it necessarily has to go fully bankrupt, but we would have to sell Lighthaven, figure out how to sell it. Selling it alone is a huge effort that would probably consume six months to a year
1:39:45
of like two or three of my staff, because you have to renovate a bunch of things in directions that are more commercially viable. That’s kind of just hard to reverse for an external buyer, because they know the property less well. And then you have to find a buyer, and it’s an extremely weird property.
1:39:59
So you’re going to lose something like $500,000 to a million dollars in just staff time. And then also, you know, all the effort and all the work that we have put into the property gets lost. And it would also end up being a huge morale hit for my organization. My guess is we would survive.
1:40:13
We’re a pretty tight-knit crew. I also, of course, think it would majorly hurt our relationship to Jaan, who gave us the mortgage and who would probably end up taking a loss on it, given that we can’t fully fulfill the relevant obligations. And so who knows what would happen to our future philanthropic funding in that respect.
1:40:30
And I think he would have a lot of justified frustration. Yeah, I mean, that would be quite bad. I do think 2 million is kind of a place where, as a result of the Lighthaven project, one of the things that we did is we purchased one of the houses.
1:40:42
There were kind of five houses around the courtyard that made up the property that we purchased. And then also there was another house that happened to come onto the market at almost exactly the same time, right next door, that was on the market for roughly $1 million, which we also purchased. That one we didn’t purchase with a mortgage.
1:40:57
And so we currently own it fully in cash. We’ve been trying to take out a mortgage on it so that we can finance that, as opposed to needing to deal with such a large shortfall this year. Banks are very, very confused about loaning us money, and so it’s very annoying. Also, kind of hilariously, banks,
1:41:15
like, we’ve been trying to talk to multiple banks, and ever since the financial crisis they have these very interesting internal divides that make it very hard to explain non-standard business arrangements to them. I think there were a lot of problems in the housing crisis where they had too much judgment.
1:41:31
And so one of the things that seems to have happened: I was on a call with a guy from a bank we were arranging a mortgage with, and we were almost all the way through. And then they were like, hey,
1:41:41
maybe the deal is going to fall through because the assessor that we sent over said that the house is not currently ready to have two people sleep in and be ready for a single family. And I’m like, no shit. It’s used as an event space for a conference venue, as I told you.
1:41:57
And then he was like, well, I understand. However, the place is zoned as a single-family home, and ever since 2008, I am not allowed to talk to the assessor who comes to your house. Whoa. Whoa. And so it’s been a fun one to take out a mortgage on. My best guess is we would do it somehow.
1:42:27
Like, if it really came down to the wire, I think we would somehow figure out how to take out a mortgage. We might end up doing it at elevated rates. And in some sense, that really sucks too. Like, if we have a million-dollar mortgage, if we take out $800,000,
1:42:40
a million dollars on a mortgage, and we get subpar rates, 10%, 11%, something in that range, which is not implausible if we have no other choice and we sure do look like an organization that is about to run out of money because it failed to fulfill its stated obligations, and so should probably be given higher rates
1:42:56
for the interest payments, now you’re talking about an additional hundred thousand dollars we have to pay each year in interest payments alone, plus an additional hundred thousand dollars we probably have to pay in principal at least. And so now you’re talking about, well, two hundred thousand more dollars we have to come up with
1:43:08
every year if we fall short of the one million dollars. That sucks. Not great. And so two million dollars is kind of the cliff. If we meet that, I think we will survive, but, you know, long-term burn rate will, I think, be kind of less good and somewhat suboptimal. Three million dollars is where we’re good. Less
1:43:24
than two million dollars, you know, we probably will survive at 1.8; at 1.5 I think we’re in a really, really tricky spot.
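For readers who want the arithmetic spelled out, here is a rough back-of-the-envelope sketch of the shortfall math Oliver describes above. The exact figures (a roughly $1 million mortgage at about a 10% rate, with principal paid down over roughly ten years) are assumptions for illustration, not numbers confirmed in the episode.

```python
# Back-of-the-envelope sketch of the extra yearly burden described above.
# Assumptions (illustrative, not confirmed in the episode): a ~$1M mortgage
# at an elevated ~10% rate, with principal paid down over ~10 years.

def annual_mortgage_burden(principal_usd: float, rate: float, payoff_years: int) -> dict:
    """Rough first-year interest plus straight-line principal payment."""
    interest = principal_usd * rate                    # ~$100k at 10% on $1M
    principal_payment = principal_usd / payoff_years   # ~$100k over 10 years
    return {
        "interest": interest,
        "principal": principal_payment,
        "total": interest + principal_payment,         # ~$200k/year extra
    }

if __name__ == "__main__":
    print(annual_mortgage_burden(1_000_000, 0.10, 10))
```

Which lines up with the "two hundred thousand more dollars a year" figure Oliver mentions.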
1:43:31
And you said you’re about 1 million right now? That’s right. Okay.
1:43:34
Thank God we’re halfway there. Yeah, I agree. I’ve got to say, I mean, I’ve loved LessWrong and the community coming up on 15 years. I loved Lighthaven when I was there, and I plan to visit again. Well, I actually was on the fence before. I haven’t donated yet. I didn’t know...
1:43:47
Well, to be perfectly frank, you know, I didn’t know exactly where the need came from. I wasn’t, you know... I should explain that I knew it was, you know, something to do with the FTX kerfuffle. And I was like, well, how will they survive after this? Is this just going to be a perpetual thing?
1:44:02
And of course, you know, why $3 million? And you’ve given really satisfactory answers to both. And I’ll be donating before the end of the year. Yeah. I encourage anyone who has the means to help out as well.
1:44:14
I think that you guys are doing really important work and I am running out of ways to say that without just repeating myself.
1:44:21
Oh, thank you. Also, if you donate to us, because we have a physical campus, you can get things named after you. You can dedicate objects to whatever virtue of rationality you care most about. It doesn’t have to be your name. We’ve had people who were like, please name the
1:44:36
power adapter of the electric keyboard in your central room after us, because last time I tried to play the keyboard it was missing. So if you name it after me, you have to find it first, and then next time I come visit, I will be able to play it. Awesome.
1:44:51
So, you know, you can use your naming rights. You can go wild. I really like the idea of just, like, there’s a physical space. I think physical spaces, as opposed to web design, which in many ways is actually very constraining, leave an amazing amount of surface area to pay respects and add detail
1:45:07
that just pays homage to the people who made the place possible. And so if you donate to us, you too can name the power adapter of a Yamaha keyboard after you. Or if you’re more boring, you can get a bench.
1:45:20
So two things on this. If you’re more adventurous, I know someone has gotten an entire garden section named after him for donating a very large amount. It’s a gorgeous place to have named after you forever. I’ve heard people compare this place to, like, the Academy in Ancient Athens or Bell Labs, and, like,
1:45:38
that this is how this place may be remembered in the future. And just having your name on some part of it is amazing, especially when it’s the goddamn gardens. We announced in our last episode, which we recorded just as the fundraiser was starting,
1:45:51
that for a thousand dollars you could get your name on a bench. That sold out so freaking quickly that you guys had to up it to two thousand, right? Yeah, we ran out of benches, but you can still get your name on other things. Is it two thousand now for physical objects? Okay, cool.
1:46:05
But also, you know, if you donate less and you have a specific… I mean, we were considering going full libertarian microeconomics. Like, you know, $50, plastic knife. Put it in a laser cutter, cut it. It’s one use, you know.
1:46:23
So yeah, many options here. But you can have things named after you and just be here for, I don’t even know how long, as long as the plaque lasts, I guess. Indeed. Lighthaven used to get more funding from philanthropic organizations, and I heard at
1:46:35
least a little bit of speculation that, okay, this place is amazing. I was talking with someone else, and one of the reasons it’s amazing is because it is like LessWrong in a physical space, in that ideas are taken seriously. What people care about isn’t your particular position on an issue.
1:46:51
They care about whether you really want to know the truth, whether you’re genuine about that and you’re curious about things. And as long as you are, people are willing to engage with you and ask you things and talk with you, even if your ideas are kind of non-standard.
1:47:04
I guess heterodox is a term that is often used nowadays. I heard at least someone speculating that you may have received less funding from philanthropic organizations because, at Manifest and LessOnline, you tolerate people that are considered not palatable to more left-wing interests. Do you think there’s any truth to that?
1:47:22
Yeah, I mean, totally. So historically, we’ve been funded a lot by Open Philanthropy. I think the very straightforward thing that happened is roughly that Dustin Moskovitz, the biggest funder of Open Philanthropy, became very involved in US politics around 2020. I think for kind of reasonable reasons, he was worried by a Trump presidency and became quite involved.
1:47:40
In that context, he kind of started experiencing a lot more pressure from various political parties and from various political forces in Silicon Valley, and Dustin himself kind of started identifying more as part of the left-leaning political alliance and blue tribe and various things like this. Then in 2024, like a few months ago, things
1:47:58
really came to a head, and he kind of told the Open Philanthropy staff, which he had been doing most of his giving through (you know, he co-founded Facebook, has many billions of dollars, and has historically deferred very extensively to Open Philanthropy),
1:48:12
but basically he told Open Philanthropy that a lot of the stuff it funded felt like it wasn’t really his tribe, wasn’t really representing his identity appropriately, felt to him like it was doing things that didn’t resonate with how he wanted people to relate to the world. He explicitly pointed at things in the reference class of being high decouplers,
1:48:30
being interested in evaluating ideas on their own merits, without necessarily needing to take the fact that you’re considering an idea as a social signal about endorsing the consequences of that idea being taken seriously in the rest of the world, kind of being a high decoupler,
1:48:46
as something that he finds very problematic in a lot of the rationality and EA community and various parts of that. And yeah, I think he really disliked Manifest hosting Richard Hanania and a few other people who are vaguely right-associated. I think he also really, really dislikes anything vaguely associated with the Thielosphere,
1:49:03
kind of, per a previous description, the sphere around Peter Thiel, which I think is a relatively influential group in Silicon Valley. And I have my own critiques of Thiel, but I definitely find many of his ideas quite interesting and have long prided myself on LessWrong being the kind of place where ideas from the Thielosphere and ideas from many
1:49:20
different parts of the internet that are generally quite heterodox can be discussed, and where Lighthaven can host those things. And yeah, I’m quite confident that that played a non-trivial role in basically Dustin telling Open Philanthropy that they could no longer make any grants to us, and more broadly to the more heterodox parts of
1:49:36
the extended EA and rationality and AI safety ecosystem.
1:49:40
That is one of the things I like most about rationalists, that we do get people on the far left and the far right talking to each other. And most of us are somewhere in the center. We’ve got weird mixes of ideas from both the left and the right.
1:49:54
And I’ve had some conversations here that I know I could not have anywhere else. And with people I disagree with, right? But the point is that I can talk to them here. And I think that is extremely valuable. And I personally think that it is worth extra money to have places where you can
1:50:09
talk about these sorts of things. If anyone else feels that it’s important to be able to talk with people you disagree with without thinking that they’re evil and without trying to silence them, this is one of the good places to put your money.
1:50:19
I agree. I hope so.
1:50:21
Well put. Well, two last things, I guess. What is the future of Lighthaven?
1:50:26
I mean, in some sense, we described it with the river and shore. I think that kind of describes roughly where I hope things develop. We keep running great events here. I think hopefully, if things go well enough, we can kind of physically expand the space as the shore, which is slowly where the sediment, I don’t know,
1:50:42
like the sediment part of the metaphor, but where the cool people are depositing themselves and the network is growing, and kind of the more permanent part starts growing. I hope we can make it so that that doesn’t displace the river and the great things coming through.
1:50:56
But I hope to grow that more and more as we keep having more great events here. And I really want this place to just be like an intellectual hub of the world. And I think we’re on track to that. Like, I think so far, every big conference that has run an event here wants to come back.
1:51:10
If you extrapolate that into the future, I think we can easily be talking about a world where we’re having 30 of the 100 best conferences in the world happening here, and really just feeling like... I don’t necessarily think Bell Labs or Athens is the right
1:51:25
abstraction. My best guess is something closer to Vienna at the height of a lot of the original coffee shops: being a place where the idea exchange is just happening, where you feel you can talk to other people about really anything, and get people from all places across the world united by a
1:51:41
shared sanity, a shared rationality, shared mechanisms of making sense of the world. Hopefully it goes well, and hopefully this all happens before AI kills us all.
1:51:51
On that final note, this is going to take just the tiniest bit of setup. Harry Potter and the Methods of Rationality. Yes. Wonderful novel, web serial, I guess. The last chapters were published on Pi Day of 2015. That’s right. And Eliezer, on LessWrong, on hpmor.com, on all the places that he had to talk to people about this, said,
1:52:11
on that day, find your local LessWrong meetup. Here’s a whole site for people to coordinate. We can celebrate this thing wrapping up. It was, I don’t know about everywhere in the world, but in many places it was a huge Schelling point for rationalists to come and meet each other.
1:52:24
I remember Steven and I had been trying to get the Denver rationalist scene going for a number of months, not a year, but probably like half a year. And it was a struggle. We would get one or two people coming each time. They wouldn’t come back, because there had only been one or two people, right?
1:52:40
When we booked a space to have the HPMOR Wrap Party, we booked it originally for maybe 10 people. And we had to move twice that night because more than 40 people showed up. I think it was probably 50, 50 to 60 people that came. Yeah. At the end of the night, I said,
1:52:58
anybody who likes this sort of people, this sort of conversation, we’re meeting here again in one month’s time. We’re going to be doing this every month. That kicked off the Denver rationalist scene. Yeah. And the Denver rationalist scene has been going ever since then. And I heard you had a lot to do with the HPMOR wrap parties.
1:53:14
That’s right. Despite all, you know... my greatest achievement is, I think, in the eighth chapter from the end of HPMOR, I have my own presence. I have my own character, Oliver Habryka, sixth-year Gryffindor with a purple tie, giving a completely bullshit speech about the death of the Defense Professor.
1:53:36
I wouldn’t necessarily say that I got the most generous portrayal, but I am very proud of that. And I think that was directly the result of me basically running the whole project of the HPMOR Wrap Parties. It was kind of one of the first things I did after I had started undergrad and
1:53:50
moved to the US. And I’m very proud of it. That’s how I got started with any of this; it was the first big project that I took on. And I think it just worked. And I saw that I could really improve this whole ecosystem and community by just making various things happen.
1:54:04
I didn’t know I was talking to the person that I could thank for that right now. That’s awesome. I’m very glad. Yeah, thank you so much. I mean, it’s like you said, we’re coming up on 10 years for the local Denver-area LessWrong group.
1:54:16
Yeah, and that lines up with the 10 years that I’ve been around.
1:54:20
And that brings me to my slightly self-interested follow-up question. Are there any plans for a 10-year anniversary sort of re-wrap-party Schelling point, getting everybody in the various cities to come together again? Because so many people showed up that would not have come out otherwise.
1:54:40
And I don’t know if there’s other cities that also started their rationalist scenes with this, but if there was a 10-year anniversary, I bet there’d be a boost to the local scenes in a lot of places.
1:54:49
I mean, that’s a really cool idea. I really don’t know whether it would work. Yeah, I haven’t considered it at all. I wasn’t thinking about it. But like, yeah, the 10-year anniversary of the end of HPMOR, that just seems like one of those things. And Pi Day already is a good Schelling point to meet. Yeah.
1:55:02
Do you know, last time, I think that day presumably fell on a weekend. We might have to slightly… It was a work day. Oh, it was a work day. Yeah, I remember. Well, if it worked that well
1:55:11
on a work day the first time... I think... I’m pretty sure it was. Anyway, this is a 10-year-old memory, so let me go ahead and give that 50-50 odds. Okay. I mean, I love
1:55:20
the idea. I will definitely, like, seriously consider it. I think we could promote the hell out of it. And also, HPMOR is still an amazingly great book; people should read it. Yeah, I think it’s still just one of the best things to read out there.
1:55:34
I bet we could get all the rationalist bloggers to talk about it. Probably Scott Alexander would be willing to say something about it.
1:55:40
I mean, Scott has... I think it was like two or three open threads ago, he was like, I’m pre-committing to no longer advertising any Lighthaven events in my open thread until the end of the year, because otherwise I will become nothing else but a Lighthaven advertising machine. Awesome.
1:55:56
But, you know, that’s next year. Yeah, exactly. All good enough.
1:55:59
And it’s worldwide. You can advertise every single event except Lighthaven. You can explicitly say, I am letting you all know that the HPMOR wrap parties are happening all around the world, except Berkeley, California, which is also happening, but I’m not advertising it.
1:56:15
Exactly. It did so much for us in Denver that I think it would be great to have something like that again. I guess, finally, where can people go to… I mean, lesswrong.com, obviously, for LessWrong.
1:56:25
Lesswrong.com will have the banner: LessWrong is fundraising. If you’re on mobile, there will be a tiny dollar sign next to the LW; you can click on that. There’s also a giant thermometer on the front page telling you how far we are toward our first goal of $1 million. If you can’t find it, something went wrong.
1:56:43
Either in your life or mine. I’m not sure what. I laughed at first because I thought I was going to ask where can people find you. And the answer is also lesswrong.com. That’s right.
1:56:51
If you want to find me, just go on lesswrong.com. You’ll probably see me either be in an argument with some random commenter somewhere about AI alignment or clean up some random moderation threads telling people to please calm down.
1:57:04
All right. Thank you for joining us. Thank you. And this has been wonderful. And my stay here has been amazing. Oh, my God. I did want to ask… Hopefully really fast, maybe like one minute. What sparked the idea for Eternal September?
1:57:17
I mean, the key thing was just like, we were kind of nearing the end of the year. The end of the year is generally the slowest time of the year. And we were just like, let’s just make a thing that people can come and check out Lighthaven when we don’t have a ton of events going on.
1:57:29
Yeah, it was a very complicated thing where we tried to communicate. We don’t have the capacity to make it amazing, but we do think the space is great and the other people here will be great. We tried to do something slightly ironic where we’re like, we are opening the floodgates to everyone,
1:57:45
which canonically is what caused the Eternal September. Right.
1:57:49
It’s been an absolutely amazing experience.
1:57:52
And as a mere visitor and not a resident, I can confirm it was amazing.
1:57:55
All right. Thank you so much. Thanks again, Oliver. Indeed. Thank you. Bye. Steven, welcome back. Thanks, man. How are you doing? I am doing great. We went so long that I think Ollie said his voice was starting to go by the end there.
1:58:06
Yeah. I think this will probably hit the cutting room floor, but at some point I had said, hey, look, in eight minutes, I’ll have been at my desk for 13 hours today. I’m running out of stamina. But that was about 10 minutes before the end. But the thing is, I found it riveting.
1:58:19
And I’m not sure how this will work into the episode either, but I jumped out for a few minutes to handle some cat business cause we’re cat sitting this week. And, uh, They were being dropped off and had to get all their stuff in and acclimated and feeding schedules and all that business.
1:58:31
So we have a couple of cats in the house to take care of. And so there’s like, I don’t know, 20 minutes of the episode I get to listen to. So that’ll be fun. Oh, yeah. I think you’ll have a good time. Yeah, I’m sure I will.
1:58:41
I jumped right back in when you guys were talking about Fallout Vault Tech experiments. I need to find the context for this.
1:58:47
Yeah, you’re going to have an interesting time when you get to that.
1:58:51
Awesome.
1:58:52
We should go and do the thing we do right after our main section every time, which is thank the Guild of the Rose, who we are partnered with, which is, it seems a little awkward this week because...
1:59:02
We basically shilled for Lighthaven the whole time.
1:59:05
Yes, exactly.
1:59:06
I’m happy to cross that bridge because Guild of the Rose and Lighthaven have two very different goals. Like Lighthaven wants to be sort of like a physical hub for the rationalist community and host big conferences and stuff like that. The Guild of the Rose is more for you, the individual listening.
1:59:22
And importantly, a lot of the focus at the Guild of the Rose is on improving yourself and the sanity waterline of both yourself and maybe other people you talk to, trying to help spread rationality to everybody, right? The Guild of the Rose is more about improving and getting to that place over time.
1:59:38
Yeah, they have the goal and actual workshops and everything to combine practical life skills, mental and physical wellness, rational thinking in a way that helps everybody who participates reach their full potential. Lighthaven doesn’t do that. They do cool stuff, but they don’t do that. And honestly, this is not a diss on Oliver.
1:59:58
I think that Lighthaven is awesome and should exist. But if I had to pick an organization, it would be the Guild of the Rose. Everyone should check it out. Honestly, we’ve pitched it, I don’t know, for a year or two every episode. If you haven’t checked out the website yet, see what’s going on and be like,
2:00:11
oh shit, I’m missing out. I should have checked this out two years ago. I was thinking about how I would sell rationality to somebody. Like, to somebody who just didn’t have the same wiring that made them really prioritize truth-seeking and life enhancement and stuff.
2:00:24
To be able to look at just a real-life question like, will I be happier at another job? And be able to come up with an answer that’s useful. People would pay out the nose for that, right? Well, it turns out that there are ways to learn how to make better decisions, make more informed
2:00:38
predictions about the future, and recognize how good you are at predicting the future. Yeah. And all of that is through rationality training, which you can get from guildoftherose.org.
2:00:47
Oh, yeah. And also there’s a link in the show notes. Yes. Yeah. We believe in this, people, and this is why we partner with them every single episode. Totally.
2:00:55
I didn’t come off too harsh on Lighthaven, did I? I’ll emphasize that, again, they’re doing two very different things.
2:01:00
Yeah.
2:01:01
I have a small personal feedback. One of the things that Less Wrong does every year is collate the best posts of the previous year. They promote them on the website as like, these are the best ones. And also they put them into a little print book that you can buy. In fact,
2:01:14
the very first time we had Oliver on, he was on with Ben shortly after LessWrong had been rebooted to LessWrong 2.0. And this was one of the things they were talking about. Like, hey, you can buy these things now. And I think both you and I own the first year’s collection, right? Yep. It’s really cool.
2:01:28
It’s a great thing that they do. The reason I bring this up is because one of my posts has been nominated for potentially being one of the top 50 posts of the year, which I find flattering and awesome. The weird thing is, it’s an out-of-distribution LessWrong post.
2:01:43
Everyone knows what LessWrong posts are mostly like. You talk about a cool insight into the world, something that updates you about actual physical reality. I didn’t do that. I wrote a piece of fiction, but it was deeply rat-adjacent, to the point where I figured I could put it up on LessWrong.
2:01:58
And I did, and it got a decent handful of upvotes, but it also got at least a couple nominations to be in the best of the year. So now apparently that’s a thing. Everybody who gets a nomination is encouraged to do a self-review of their post, which is basically: go back.
2:02:14
Now that it’s been a year, do you still agree with this? How would you have changed what you initially said, etc.? That doesn’t quite apply to a piece of fiction, but I’m going to go ahead and write a self-review tonight. But the thing is, if you have read it, you can go to LessWrong,
2:02:26
and I’ll put a link in the show notes, and you can leave a review as well. You can either upvote it to be included, or if you think that it’s not really worth being in the top 50, that there are enough other interesting things that it should not crowd out,
2:02:38
you can downvote it as well. It’s a short story that I wrote, and I don’t know, I’m just kind of excited, so I’m letting people know in case you want to review it or talk about it or anything.
2:02:48
There’ll be a link in the show notes and thank you in advance, regardless of what you do.
2:02:52
Is that “The Real Fanfic Is the Friends We Made Along the Way”? Yes, it is. You know, that came out in October of 2023. Okay. Aren’t they doing the 2024 nominations?
2:03:01
No, no, no. It’s always one year in arrears.
2:03:04
Oh, well, in that case, then yes, this is perfect. And everyone should check it out. This is a lot of fun. Like I said, it’s not the boilerplate LessWrong post. This is a fun short story. One that I thought must be older than 2023 by now.
2:03:14
But another great short-story post was one by our good friend Matt Freeman, on the parable of the king. That one has actually also been nominated. Was that in
2:03:23
2023 as well? It must have been. It must, yeah. They always do it with an entire year in arrears, because they want the posts to be able to age, so the ones that come up are the ones that people actually remember and that have stood
2:03:34
at least a little bit of the test of time. Awesome. Well, that’s perfect. I’m all over it.
2:03:38
But yeah, Matt Freeman’s was also nominated, so I guess we can also post a link to that one. Huzzah. All right, that brings us to the very last thing we need to do,
2:03:45
yes, which is... I think our... well, we love and appreciate all of our listeners all exactly the same, but every week one gets a special shout-out, and I think I got the privilege last episode. This week we are thanking Cap’n Corti.
2:04:02
Thank you, Captain. Thank you very much for your support. I salute you as you are, Captain. And also, I see you on the Discord all the time. So thanks for your contributions there, too. You’re awesome.
2:04:12
Yeah, Cap, you rock. We see you around all the time. Thank you for your support. It means a lot.
2:04:16
It does. It helps us keep going. It’s going to pay for the new mixer that I am buying, hopefully in the next two days. You think that’s all right?
2:04:21
Yes, it’s about time. Okay. This one has been knocking at death’s door for a while and we’re finally answering. Gives us some compensation for stuff and pays for hosting costs and everything. So again, Captain QWERTY, you rock. Anyone else who wants to support and or hang out with people, check out the Discord link in the show notes.
2:04:42
There’s also a link to our Patreon and our Substack there. If anyone else wants to throw a couple bucks our way, we’re super into it.
2:04:47
We really do appreciate it. The money helps everyone keep going, you know? It’s the unit of caring, right? Yes. To care about something, this is one way to show that.
2:04:54
Absolutely.
2:04:55
As I learned when, you know, yeah. I donated a significant chunk of my income to Lighthaven this year. Yes, and you will be immortalized for it. Oh, also, this is a very special Christmas episode.
2:05:08
Oh, yes. We don’t usually air on Christmas, do we? But this is urgent news, people.
2:05:13
It is. We always take off the last episode of the year, but not this time. And that means that not only are we putting out one extra episode this year, we are putting it out literally on Christmas because that just happened to be how the days lined up.
2:05:25
Merry Christmas if you happen to be listening to this on Christmas, and if not, happy Christmas and or whatever holidays you like to enjoy. And if you hate the holidays, happy winter, and we’re on the other side of the solstice, so happy longer days.
2:05:37
Yeah, we wanted it to go out before the end of the year, for the obvious reasons.
2:05:40
To be clear, I believe they are taking donations past December 31st.
2:05:45
Oh yeah, they’re taking donations all of next year too. It’s just that if you donate before the end of the year, you can deduct it from your taxes this year, because it is a 501(c)(3).
2:05:53
Oh, I didn’t know that. Great. All right. Well, people should do that. Yeah, we should have clarified that.
2:05:56
Steven, this was a delight. Thank you. And I’ll talk to you again in a few days. Sounds great, man. Appreciate it. Bye-bye.
I have added a collapsible section with a copy of the very bad transcript to the post! Seemed useful to have it.
At least when the link opens the Substack app on my phone, I see no such transcript.
It’s available on the website, at least.