To be sort of blunt, I suspect a lot of the reason AI safety memes are resisted by companies like DeepMind is that taking them seriously would kill their business models. It would force them to fire their AI capabilities groups, expand their safety groups, and at least try not to advance AI.
When billions of dollars are on the line, potentially trillions if the first AGI controls the world for Google, it’s not surprising that any facts that would force them to reconsider that goal are doomed to be dismissed. Essentially it’s a fight between status-seeking, ambition, and the drive towards money on one side, and ethics and safety on the other, and with massive incentives behind the first group of motivations, the second group loses.
On pivotal acts, a lot of it comes from MIRI, who believe that a hard takeoff is likely, and to quote Eli Tyre, whether takeoff is hard or soft matters for whether pivotal acts need to be done:
On the face of it, this seems true, and it seems like a pretty big clarification to my thinking. You can buy more time or more safety, a little bit at a time, instead of all at once, sort of in the way that you want to achieve life extension escape velocity.
But it seems like this largely depends on whether you expect takeoff to be hard or soft. If AI takeoff is hard, you need pretty severe interventions, because they either need to prevent the deployment of AGI or be sufficient to counter the actions of a superintelligence. Generally, it seems like the sharper takeoff is, the more good outcomes flow through pivotal acts, and the smoother takeoff is, the more we should expect good outcomes to flow through incremental improvements.
Are there any incremental actions that add up to a “pivotal shift” in a hard takeoff world?
I think that the big labs could be moved if the story was: “There will be a soft takeoff during which dozens of players will make critical decisions, and if the collective culture is supportive of the right approach (on alignment), then we will succeed, otherwise everyone will be dead and $GOOG will be $0.” The story of hard takeoffs and pivotal acts is just not compatible with the larger culture, and cannot be persuasive. Now there is the problem of epistemics: the soft takeoff model being convenient for persuasion reasons doesn’t make it true. But my opinion is that the soft takeoff model is more likely anyway.