My wife and I just donated $10k, and will probably donate substantially more once we have more funds available.
LW 1.0 was how I heard about and became interested in AGI, x-risk, effective altruism, and a bunch of other important ideas. LW 2.0 was the place where I learned to think seriously about those topics & got feedback from others on my thoughts. (I tried to discuss all this stuff with grad students and professors at UNC, where I was studying philosophy, with only limited success.) Importantly, LW 2.0 was a place where I could write up my ideas in blog post or comment form, and then get fast feedback on them (by contrast with academic philosophy, where I did manage to write on these topics, but it took 10x longer per paper to write, then years to get published, then additional years to get replies from people I didn't already know). More generally, the rationalist community that Lightcone has kept alive, and then built, is… well, it's hard to quantify how much I'd pay now to retroactively cause all that stuff to happen, but it's way more than $10k, even if we just focus on the small slice of it that benefitted me personally.
Looking forward, I expect a diminished role, due simply to AGI being a more popular topic these days so there are lots of other places to talk and think about it. In other words the effects of LW 2.0 and Lightcone more generally are now (large) drops in a bucket whereas before they were large drops in an eye-dropper. However, I still think Lightcone is one of the best bang-for-buck places to donate to from an altruistic perspective. The OP lists several examples of important people reading and being influenced by LW; I personally know of several more.
...All of the above was just about the magnitude of impact, rather than the direction. (Though a positive direction was implied.) So now I turn to the question of whether Lightcone is consistently a force for good in the world vs. e.g. a force for evil or a high-variance force for chaos.
Because of cluelessness, it’s hard to say how things will shake out in the long run. For example, I wouldn’t be surprised if the #1 determinant of how things go for humanity is whether the powerful people (POTUS & advisors & maybe congress and judiciary) take AGI misalignment and x-risk seriously when AGI is imminent. And I wouldn’t be surprised if the #1 determinant of that is the messenger—which voices are most prominently associated with these ideas? Esteemed professors like Hinton and Bengio, or nerdy weirdos like many of us here? On this model, perhaps all the good Lightcone has done is outweighed by this unfortunate set of facts, and it would have been better if this website never existed.
However, I can also imagine other possibilities—for example, perhaps many of the Serious Respected People who are, and will be, speaking up about AGI and x-risk etc. were or will be influenced to do so by hearing arguments and pondering questions that originated on, or were facilitated by, LW 2.0. Or alternatively, maybe the most important thing is not the status of the messenger, but the correctness and rigor of the arguments. Or maybe the most important thing is neither of those, but rather simply how much technical work on the alignment and control problems has been accomplished and published by the time of AGI. Or maybe… I could go on. The point is, I see multiple paths by which Lightcone could turn out, with the benefit of hindsight, to have literally prevented human extinction.
In situations of cluelessness like this, I think it's helpful to put weight on factors that are more about the first-order effects of the project & the character of the people involved, and less about the long-term second- and third-order effects etc. I think Lightcone does great on these metrics. I think LW 2.0 is a pocket of (relative) sanity in an otherwise insane internet. I think it's a way for people who don't already have lots of connections/network/colleagues to have sophisticated conversations about AGI, superintelligence, x-risk, … and perhaps more importantly, also topics 'beyond' that like s-risk, acausal trade, the long reflection, etc. that are still considered weird and crazy now (like AGI and ASI and x-risk were twenty years ago). It's also a place for alignment research to get published and get fast, decently high-quality feedback. It's also a place for news, for explainer articles and opinion pieces, etc. All this seems good to me. I also think that Lighthaven has positively surprised me so far; it seems to be a great physical community hub and event space, and I'm excited about some of the ideas the OP described for future work.
On the virtue side, in my experience Lightcone seems to have high standards for epistemic rationality and for integrity & honesty. Perhaps the highest, in fact, in this space. Overall I’m impressed with them and expect them to be consistently and transparently a force for good. Insofar as bad things result from their actions I expect it to be because of second-order effects like the status/association thing I mentioned above, rather than because of bad behavior on their part.
So yeah. It’s not the only thing we’ll be donating to, but it’s in our top tier.