Looking at these, I feel like they are subquestions of “how do you design a good society that can handle technological development”; most of them are not AI-specific or CAIS-specific.
For me this is the main point of CAIS. It reframes many AI safety problems as “make a good society” problems, but now you can consider scenarios involving only AI. We can start to answer the question “how do we make a good society of AIs?” with the question “how did we do it with humans?”. It seems like human society did not have great outcomes for everyone by default. Making human society function took a lot of work, and it failed many times along the way. Can we learn from that and make AI society fail less often, or less catastrophically?
Yeah, I understand that. My point is that just as human society didn’t work by default, systems of AIs won’t work by default, and the interventions that will be needed will require AI researchers. That is, it’s not just about setting up laws, norms, contracts, and standards for managing these systems; it’s about figuring out how to make AI systems that interact with each other the way humans do in the presence of laws, norms, contracts, and standards. Someone who is not an AI researcher would have no hope of solving this, since they cannot understand how AI systems will interact and cannot offer appropriate interventions.