But we should not advocate for work on mitigating AI x-risk instead of working on immediate AI problems. That’s just a stupid, misleading, and self-destructive way to frame what we’re hoping for.
This works well for center-left progressives who genuinely believe that the more immediate AI problems are an issue. However, a complication is that there are techy right-wing and radical left-wing rationalists who are concerned about AI x-risk, and who are also, for basically unrelated reasons, concerned about things like getting censored by big tech companies. For them, “working on more immediate AI problems” might mean supporting that tech-company overreach, which is something they genuinely oppose and which feeds into the status-competition point you raised.
Anecdotally, it seems to me that the people who adopt this framing tend to be the ones who hold those beliefs, and in my own life my sympathy towards the framing has correlated with holding those beliefs.
Yeah. There are definitely things that some people classify as “current AI problems” and others classify as “not actually a problem at all”. Algorithmic bias is probably an example.
Hmm, I’m not sure that anyone, techy or not, would go so far as to say “current AI problems” is the empty set. For example, I expect near-universal consensus that LLM-assisted spearphishing is bad, albeit no consensus about whether to do anything about it, and if so what. So “current AI problems” is definitely a thing, but it’s a different thing for different people.
Anyway, if someone believes that future AI x-risk is a big problem, and that algorithmic bias is not, I would suggest that they argue those two things at different times, as opposed to within a single “let’s do X instead of Y” sentence. On the opposite side, if someone believes that future AI x-risk is a big problem, and that algorithmic bias is also a big problem, I also vote for them to make those arguments separately.