Statements of the form “we shouldn’t balance the risks and opportunities of X” are substantive only when X is closely tied to a fundamental principle or a terminal goal, since only then would anyone refuse the trade-off. Nobody wants superhuman AGI for its own sake (quite the opposite: it is the ultimate instrumental goal), so “we should balance the risks and opportunities of AGI” is a claim no one could disagree with, that is, an applause light.