I think you are incorrect on the dangerous use cases, though I am open to your thoughts. The most obvious dangerous case right now, for example, is AI-driven algorithmic polarization via social media. As a society we are reacting, but it doesn’t seem like it is in a particularly effective way.
Another way to see this ongoing destruction of the commons is the automated spam and decline in search engine quality that are already happening, which reduce utility to humans. This is only in the “bit” universe, but it certainly affects us in the atoms universe, and as AI gains “atom”-universe effects, I can see similar pollution being very negative for us.
Banning seems hard, even for obviously bad use cases like deepfakes, though reality might prove me wrong (happily!) there.
Thanks for engaging kindly. I’m more positive than you are about our ability to ban use cases, especially if existential risk awareness (and awareness of this particular threat model) is high. Currently, we don’t ban many AI use cases (such as social media algorithms), since they don’t threaten our existence as a species. A lot of people are of course criticizing what social media does to our society, but since we decide not to ban it, I conclude that in the end, we think its existence is net positive. But there are pocket exceptions: smartphones have recently been banned in Dutch secondary education during lecture hours, for example. To me, this is an example showing that we can ban use cases if we want to. Since human extinction is far more serious than, say, reduced focus among schoolchildren, and we can ban for the latter reason, I conclude that we should be able to ban for the former reason, too. But threat model awareness is needed first (and we’ll get there).