Having an in-person campus that allows people to have really good high-bandwidth communication is a big component of this, and something I now think is really useful to have in many worlds.
On a higher level of abstraction, I have an internal model that says something like the following three capacities are quite important for AGI (and some other x-risks) to go well:
The ability to do really good research and be really good at truth-seeking (this is necessary to solve various parts of the AI Alignment problem, and in general just seems really important for a community to have, for lots of reasons)
The ability to take advantage of crises and navigate rapidly changing situations (as a concrete intuition pump, I currently believe that before something like AGI we will probably have something like 10 more years at least as crazy as 2020, and I have a sense that some of the worlds where things go well are worlds where a bunch of people concerned about AI Alignment are well set-up to take advantage of those crises, and make sure not to get wiped out by them)
The ability to have high-stakes negotiations involving large piles of resources and people (like, I think it’s pretty plausible that in order to actually get the right AI Alignment solution deployed, and to avoid us getting killed some other way before then, some people who have some of the relevant components of solutions will need to negotiate in some pretty high-stakes situations to actually make them happen, and in a much more coherent way than people are currently capable of)
These are all pretty abstract and high-level; I have a lot of more concrete thoughts, though it would take me a while to write them up.