Also, if I may ask about “no longer seems sufficient”: did you think it was sufficient? The sentence seems really strange to be honest; otherwise I’d be curious whether you have a text where you explained why you thought that, as it seems quite surprising.
I do think something like this is kind of correct. It’s not that I thought that nothing else had to happen between now and then for humanity to successfully reach the stars, but I did meaningfully think that there were a good number of universes where my work on LessWrong would make the difference (with everyone else of course also doing things), and that I was really moving probability mass.
I still think I moved some probability mass, but I have further updated that, in order to realize a bunch of the probability mass I was hoping for, I need to get some other pieces in place. That’s something I didn’t previously think was as necessary; I used to think the online component of things would itself be sufficient to realize a lot of that probability mass.
I definitely didn’t believe that if I were to just make LessWrong great, existential risk would be solved in most worlds.
What do you think these components are?
Having an in-person campus that allows people to have really good high-bandwidth communication is a big component; I now think it’s a really useful thing to have in many worlds.
On a higher level of abstraction, I have an internal model that says something like the following three components are quite important for AGI (and some other x-risks) to go right:
1. The ability to do really good research and be really good at truth-seeking (necessary to solve various parts of the AI Alignment problem, and also in general just really important for a community to have for lots of reasons)
2. The ability to take advantage of crises and navigate rapidly changing situations (as a concrete intuition pump, I currently believe that before something like AGI we will probably have something like 10 more years at least as crazy as 2020, and I have a sense that some of the worlds where things go well are worlds where a bunch of people concerned about AI Alignment are well set up to take advantage of those crises, and make sure not to get wiped out by them)
3. The ability to have high-stakes negotiations with large piles of resources and people (like, I think it’s pretty plausible that in order to actually get the right AI Alignment solution deployed, and to avoid us getting killed some other way before then, some people who have some of the relevant components of solutions will need to negotiate in some pretty high-stakes situations to actually make them happen, and in a much more coherent way than people are currently capable of)
These are all pretty abstract and high-level; I have a lot of concrete thoughts that are less abstract, though it would take me a while to write them up.