Thanks for writing this; it is something I have thought about before when trying to convince people who are more worried about “short-term” issues to take the “long-term” risks seriously. Essentially, one can think of two major “short-term” AI risk scenarios (or at least “medium-term” ones that “short-term”ists might take seriously), corresponding to the prospects of automating the two factors of production:
1. Mass technological unemployment causing large swathes of workers to become superfluous and then starved out by the now-AI-enabled corporations (what you worry about in this post)
2. AI increasingly replacing “fallible” human decision-makers in corporations, if not in government, pushed by the pressure to maximize profits and unfettered by any moral or legal norm (even more so than human executives are already incentivized to be; what Scott worries about here)
But if 1 and 2 happen at the same time, you’ve got your more traditional scenario: AI taking over the world and killing all humans once they have become superfluous. This doesn’t provide a full-blown case for the more Orthodox AI-go-FOOM scenario (you would need additional assumptions, e.g., about intelligence explosion macroeconomics), but it at least serves as a case that Reform AI Alignment is a pressing issue. Those who are convinced of that will ultimately be more likely to take the AI-go-FOOM scenario seriously, or at least to operationalize their differences with its believers as purely object-level disagreements (about intelligence explosion macroeconomics, about how powerful intelligence is as a “cognitive superpower”, etc.) as opposed to the tribalized meta-level disagreements that define the current “AI ethics” v. “AI alignment” discourse.