With that in mind, I was surprised by the lack of information in this funding request. I feel mixed about this: high-status AIS orgs often (accurately) recognize that they don’t really need to spend time justifying their funding requests, but I think this often harms community epistemics (e.g., by leading to situations where everyone is like “oh X org is great, I totally support them” without actually knowing much about what work they’re planning to do, what models they have, etc.).
Sorry about that! I’ve drafted like 3-4 different fundraising posts over the last few months, most of which were much longer and more detailed, but I repeatedly ran into the same problem: when I showed them to people, they misunderstood how the post related to Lightcone’s future work, overfitting to all the details I had put in and thinking about our efforts in a very narrow way, while missing something important about how we expect to approach things over the coming years.
I ended up deciding to instead publish a short post, expecting that people would ask a lot of questions in the comments, and then to engage straightforwardly and transparently there, which felt more likely to end in shared understanding. I’m not sure whether this was the right call, and definitely a good chunk of my decision here is also driven by fundraising being the single most stressful aspect of running Lightcone; I just find that stress easier to navigate if I respond to things in a more reactive way.
What are Lightcone’s plans for the next 3-6 months? (is it just going to involve continuing the projects that were listed?)
We are going to be wrapping up renovations and construction for the next month, and will then host a bunch of programs over the summer (like SERI MATS and a bunch of workshops and conferences). During that time I hope to reconnect a bit more with the surrounding AI-Alignment/X-Risk/Rationality/EA/Longtermist-adjacent diaspora, which I intentionally took a pretty big step away from after the collapse of FTX.
I will also be putting a bunch of effort into Lightspeed Grants. We will see how much traction we get here, but I definitely think there is a chance it blows up into the primary project we’ll be working on for a while, since I think there is a lot of value in diversifying and improving the funding ecosystem, which currently seems to drive a lot of crazy status dynamics and crazy epistemics among people working on AI risk stuff.
After that, I expect to focus a lot more on online things. I probably want to do a major revamp of the AI Alignment Forum, as well as focus a lot more of my attention on LessWrong again. I am particularly excited about finally properly launching the dialogues feature and driving adoption of it, probably in part by me and other Lightcone team members participating in a lot of dialogues while we also continue developing the technology on the backend.
How is Lightcone orienting to the recent rise in interest in AI policy? Which policy/governance plans (if any) does Lightcone support?
I’ve been thinking a lot about this, and I don’t yet have a clear answer. My tentative guess is something like “with a lot of the best people I know going into AI policy stuff and the hype/excitement around that increasing, the comparative advantage of the Lightcone team points even more strongly in the direction of focusing on research forums that ground the epistemic health of the people jumping headfirst into policy stuff”. This means I currently expect not to get super deeply involved, but to interface a lot with people who are jumping into the policy fray, moving to DC, etc., and to figure out what infrastructure can allow those people to stay sane and grounded, since I do really expect that as we get more involved in advocacy, politics, and policy-making, thinking clearly will become a lot harder.
But again, I don’t know yet, and at the meta-level I might just organize a bunch of events to help people orient to the shifting policy landscape while I am orienting myself to it as well.
What is Lightcone’s general worldview/vibe these days? (Is it pretty much covered in this post?) Where does Lightcone disagree with other folks who work on reducing existential risk?
This sure seems hard to answer concisely. Hopefully you can figure out our vibe from my comments and posts. I still endorse a lot of the post you linked, though I’ve also changed my mind on a bunch of stuff. I might write more here later; I think this is a valid question, but this comment is already getting very long and I don’t have an immediate good cached answer.
What are Lightcone’s biggest “wins” and “losses” over the past ~3-6 months?
In my book, by far the biggest win is that we relatively successfully handled a really major shift in our relationship to the rest of our surrounding community, as an organization whose lifeblood is building infrastructure. My sense is that at every previous organization I’ve worked at, I would rather have left or shut the organization down when FTX collapsed, because I wouldn’t have been able to think clearly, see things with fresh eyes, and reorient with my organization to the changing landscape. I think Lightcone successfully handled reorienting together, and I think this is really hard (and probably the result of us staying consistently very small in our permanent staff headcount, which is currently just 8 people).
We also built an IMO really amazing campus that I expect to utilize and get a lot of value out of for the next few years. I am also proud of having written a lot of things publicly on the EA Forum during the FTX collapse and afterwards; I think it helped the rest of the ecosystem orient better, and a lot of it was stuff that nobody else was saying and that seemed quite important.
Thanks for this detailed response; I found it quite helpful. I maintain my “yeah, they should probably get as much funding as they want” stance. I’m especially glad to see that Lightcone might be interested in helping people stay sane/grounded as many people charge into the policy space.
I ended up deciding to instead publish a short post, expecting that people will write a lot of questions in the comments, and then to engage straightforwardly and transparently there, which felt like a way that was more likely to end up with shared understanding.
This seems quite reasonable to me. I think it might’ve been useful to include something short in the original post that made this clear. I know you said “also feel free to ask any questions in the comments”; in an ideal world, this would probably be enough, but I’m guessing this isn’t enough given power/status dynamics.
For example, if ARC Evals released a post like this, I expect many people would experience friction that prevented them from asking (or even generating) questions that might (a) make ARC Evals look bad, (b) make the commenter seem dumb, or (c) potentially worsen the relationship between the commenter and ARC Evals.
To Lightcone’s credit, I think Lightcone has maintained a (stronger) reputation of being fairly open to objections (and not penalizing people for asking “dumb questions” or something like that), but the Desire Not to Upset High-status People and the Desire Not to Look Dumb In Front of Your Peers By Asking Things You’re Already Supposed to Know are strong.
I’m guessing that part of why I felt comfortable asking (and even going past the “yay, I like Lightcone and therefore I support this post” to the mental motion of “wait, am I actually satisfied with this post? What questions do I have?”) is that I’ve had a chance to interact in person with the Lightcone team on many occasions, so I felt considerably less psychological friction than most.
All things considered, perhaps an ideal version of the post would’ve said something short like “we understand we haven’t given any details about what we’re actually planning to do or how we’d use the funding. This is because Oli finds this stressful. But we actually really want you to ask questions, even ‘dumb questions’, in the comments.”
(To be clear, I don’t think failing to do this was particularly harmful, and I think your comment definitely addresses it. I’m nit-picking because I think it’s an interesting microcosm of broader status/power dynamics that get in the way of discourse, and because I expect the Lightcone team to be unusually interested in this kind of thing.)