Judging from this, might privacy regulations be one of the best ways to slow down AI development? Privacy is a widely accepted mainstream issue, so it should be much easier to advocate for: regular people can understand and get behind privacy regulation far more readily than DL regulation. On the other hand, privacy isn't a neglected cause, which makes it less important on the margin.
Why do you want regular people who aren't qualified to get involved? I can't think of any instance where unqualified people brought something productive to the table on any issue. Once you become qualified, sure, but before then, why? Qualified people end up having to sift through the garbage generated by the unqualified, which makes them less likely to stay engaged because it feels like a waste of their time. You don't need me to point out the obvious example of this, do you?
To give a short, very bad, but sort-of meaningful summary of my ideas: even idiots have resources. It might help to give a concrete example of a plausible-ish archetype of something that might happen. I don't necessarily think this exact scenario will play out, but it may help to clarify what I'm thinking.
Suppose 5% of Americans, if properly persuaded, would be willing to vote for political candidates based purely on their privacy regulation promises (or to donate to privacy nonprofits, or to contribute in some other way).
Suppose privacy regulations could meaningfully restrict data access and therefore slow the progress of deep learning capabilities.
Suppose a significant portion of those people would never be persuaded by AI x-risk arguments and would never contribute meaningfully to alignment work otherwise.
If those three facts are true, I think it would be net positive to advocate for privacy regulation directly rather than telling people about x-risks, since there are more people receptive to privacy arguments than to x-risk arguments. Obviously this would require careful consideration of your audience: if you think you're talking to thoughtful people who could recognize the importance of alignment and contribute to it, then it is clearly better to tell them about alignment directly.
Does this chain of thought seem reasonable to you? If not, what do you think is missing or wrong?