My point here is that in a world where an algo-trading company has the lead in AI capabilities, there need not be any point in time (prior to an existential catastrophe or to existential security) at which investing more resources into the company's safety-indifferent AI R&D stops seeming profitable in expectation. This claim can hold regardless of the researchers' observations, beliefs, and actions in any given situation.