I agree it’s important to be careful about which policies we push for, but I disagree with both the general thrust of this post and the concrete example you give (“restrictions on training data are bad”).
Re the concrete point: it seems like the clear first-order consequence of any strong restriction on training data is to slow down AI capabilities, while effects on alignment are more speculative and seem weaker in expectation. For example, it might be bad if it were illegal to collect user data (e.g. from ChatGPT users) for fine-tuning, but that kind of data collection is unlikely to fall under the restrictions digital artists are lobbying for.
Re the broader point: yes, it would be bad if we just adopted whatever policies other groups propose. But I don’t think that’s likely to happen! In a successful alliance, we would identify common interests between ourselves and other groups worried about AI, and push specifically for those. Of course it’s not clear this will work, but it seems worth trying.