Competition propels us toward artificial superintelligence: any AI firm that slows its pace risks being overtaken by rivals, and workers understand that refusing to do capabilities research merely leads to their replacement.
And I agree that even if a worker values his own survival above all else and believes ASI is both near at hand and bad, he plausibly doesn't make himself better off by quitting his job. But a CEO of an AI firm has far more control over how the firm's resources are allocated. If he values survival and believes ASI is near and bad, is his best move really to keep steering resources into capabilities development?