Those outcomes sound quite plausible.
I’m particularly concerned with polarization. Becoming a political football was the death knell for sensible discussion on climate change, and it could be the same for AGI x-risk. Public belief in climate change actually fell while the evidence mounted. My older post AI scares and changing public beliefs is mostly about polarization.
Having the debate become ideologically or politically motivated would not be good. I’m still really hoping to avoid polarization on AGI x-risk. It does seem like “AI safety” topics, such as concerns about bias, deepfakes, and harms from interacting with LLMs, are already discussed primarily among liberals in the US.
Neither side has started really worrying about job loss, but that would tend to be the liberal side, too, since conservatives are still somewhat more free-market oriented.
While tying concerns about x-risk to calls to slow AI based on its mundane harms might seem expedient, I wouldn’t take that bargain if it created worse polarization.
I think this is a common attitude among the x-risk-worried, especially since it’s hard to predict whether a slowdown in the US AGI push would be a net good or bad thing for x-risk.