I mostly agree with you here, so I am not sure you understood my original point. Yes, reality is giving us events that reasonable people such as you and me should treat as warning shots.
Since a lot of other people don’t react to them, you might become pessimistic and extrapolate that NO warning shot is going to be good enough. However, I posit that SOME warning shots will be good enough. An AI-driven bank run followed by an economic collapse is one example, but there could be others. Generally, I expect that when warning shots reach the level of “nation-level” socio-economic problems, people will pay attention. And I expect this to happen before doom.
Thanks for the reply; I think we do mostly agree here. One point of disagreement might be that I’m not at all confident we get a truly large-scale warning shot before AI gets powerful enough to just go and kill everyone. I think the threshold for what would really get people paying attention is above “there was a financial disaster”; my guess is it would actually take AI killing multiple people (outside of a self-driving context). That could totally happen before doom, but it could also totally fail to happen. We’ll probably get a few warning shots that are at least bigger than all the ones we’ve had before, but I can’t even predict that with much confidence.
Yes, I think we understand each other. One thing to keep in mind is that the different stakeholders in AI are NOT utilitarians; they have local incentives they individually care about. Given that COVID didn’t stop gain-of-function research, getting EVERYONE to care would require a death toll larger than COVID’s. However, getting someone like the CEO of Google to care would “only” require a half-a-trillion-dollar lawsuit against Microsoft over some issue relating to their AIs.
And I generally expect those types of warning shots to be pretty likely, given how gung-ho the current approach is.