[Question] Are there AI policies that are robustly net-positive even when considering different AI scenarios?

One thing I've noticed recently is that a lot of AI governance proposals seem to depend on specific models of how AI will develop and affect society, especially proposals motivated by misalignment or misuse concerns.

Are there AI policies that would be robustly net-positive without being tied to a specific scenario?
